No matter your political, religious, or national beliefs and values, surveillance and national security affect us all personally. This shared concern is unifying in an odd way: all of us worry about our personal safety on the internet and the risks that cyber threats pose to our lives. A recent example is global health security: how should the US prepare for global health crises such as pandemics moving forward? Another pressing issue of today's age is citizen surveillance, a sensitive subject that has come to a head with the popular app TikTok. With a reported 800 million active users, the China-based behemoth has spent the last several months at the center of a privacy and data dispute with the US, culminating in threats to ban the app from American soil altogether. On the issue, US Secretary of State Mike Pompeo stated, "It is not possible to have your personal information flow across a Chinese server without the rest of that information ending up in the hands of the Chinese Communist Party."
At a time when surveillance and security have become central talking points in governments and households alike, the Digital Pioneers Network produced another critical RELOADED Series episode on AI Solutions for National Security, seeking answers and concrete recommendations for strengthening the current state of national security.
The Digital Pioneers Network brought together some of the most prominent voices to engage in a significant conversation on national security in a post-COVID world. These esteemed guests included Deborah Wince-Smith, CEO of the US Council on Competitiveness; Deemah Al Yahya, Founder & CEO of WomenSpark, Kingdom of Saudi Arabia; Mark Bealle, Head of Strategy and Policy at the DoD's Joint Artificial Intelligence Center (JAIC); Matthew Tarascio, Vice President of Artificial Intelligence at Lockheed Martin; Irakli Beridze, Head of the Centre for Artificial Intelligence and Robotics at UNICRI, the UN's crime and justice research institute; Erin Kenneally, Director of Cyber Risk Analytics at Guidewire-Cyence; and Chris Novak, Director of Verizon Threat Research.
Wince-Smith, CEO of the US Council on Competitiveness, began the conversation by addressing the impact of technology on security, noting that, "All of these critical emerging technologies, obviously the digital revolution, biotechnology, the nanotechnology and cognitive revolution, are all really huge with military applications…and the speed at which this is occurring, this civil-military fusion, is more important than ever because of the rapidity, scale and scope of the capabilities." Building on Wince-Smith's remark, Bealle, Head of Strategy and Policy at the DoD's JAIC, stated, "The DoD is in the midst of this digital transformation. And it's very interesting because on the one hand, AI adoption at scale in the DoD is brand new and very difficult. On the other hand, DoD and its agencies like DARPA have been prime movers in AI technologies going back decades."
Beridze, Head of the Centre for Artificial Intelligence and Robotics at UNICRI, who is directly involved in the creation of an AI toolkit for law enforcement, added, "We really underline the responsible use of AI, that's the big issue. AI used by law enforcement should enshrine general principles and respect of human rights, justice, democracy, the rule of law, the rules of fairness, accountability and transparency." Beridze's point speaks to a central topic of contention in the US: addressing police brutality and ensuring ethical law enforcement.
Tarascio, Vice President of Artificial Intelligence at Lockheed Martin, emphasized AI-human interaction with the following powerful note: "Having algorithms that integrate with humans while they are training to perform a particular mission is critical to building trust. And that's how you get that human-machine teaming to be efficient." Clearly, trust between humans and machines is critical to building a robust national security structure. On the threat side, Novak stated, "When we look at the threat landscape we see a lot of activity related to social engineering attacks, a lot of that comes typically via email or text messages, and so by leveraging AI [hackers] are trying to find creative ways past various security filtering systems and onto their intended targets." Al Yahya continued the dialogue, stressing the importance of mobilizing young people for quick adoption of digital technology. Lastly, Kenneally, Director of Cyber Risk Analytics at Guidewire-Cyence, argued that we need to find ways to leverage AI to minimize the threats that decentralized processes, such as remote work, pose to cybersecurity frameworks.
The Way Forward
In order to create a robust national security plan that functions on both a national and individual level, we must consider the following action items:
- Move from the AI adoption phase to large-scale deployments across industry sectors to enable quick implementation of AI-driven predictive cybersecurity protocols.
- Ensure the ethical, transparent and responsible usage of AI toolkits and systems to build trust between public and private entities, and to build a resilient national security framework.
- Focus investment and expenditure on domestic AI technologies for defense to maintain a competitive edge over nations like China as well as over stateless threats.
- Ensure the future symbiosis between human and machine by leveraging ethical AI principles. Without them, the US risks rights violations and exploitable vulnerabilities that could cripple its cybersecurity structures.
- Encourage more young people to engage with government to help reinvent and future-proof current national security protocols.