Artificial intelligence (AI) recently received an endorsement from President Trump, who this month signed an executive order directing federal agencies to allocate more resources for research and development, promotion and training in the emerging technology.
The administration’s goal under the American AI Initiative is to prepare workers to adapt to the new era, officials said. The executive order is light on specifics, with no funding amounts disclosed, at least publicly, and no blueprint for how each agency will distribute or prioritize resources. The White House does, however, expect to see more detail on tracking expenses for AI-related R&D, Reuters reported.
“AI is something that touches every aspect of people’s lives,” a senior administration official told Reuters and other outlets last week. “What this initiative attempts to do is to bring all those together under one umbrella and show the promise of this technology for the American people.”
Last May, the White House held a meeting with some 30 major companies, including Ford Motor Co, Boeing, Amazon.com and Microsoft, at which it pledged not to hinder AI’s development. The administration’s executive order appears to be the feds' step in that direction. Along those lines, prominent tech giants are extending AI beyond powering apps to social and humanitarian issues. Last October, Google pledged $25 million to launch a fund for AI research addressing social and economic problems. Organizations chosen by the company will receive financial assistance, help from Google AI experts and computing resources.
A fast-growing number of cybersecurity startups are banking on machine learning and AI technology that can help guard against cyber attacks. The upside of AI is systems that learn and adapt to hackers' changing behavior; the downside is that attackers can foil security algorithms by poisoning the data those systems train on and the red flags they hunt for.
Still, AI-centric security startups are flooding the market. CB Insights has compiled a list of 100 startups developing AI-related technology, ranging from hardware and data infrastructure to industrial applications. And in 2017, the research firm put together a list of 80 privately held cybersecurity companies using AI, operating across nine areas spanning identity management, mobile, predictive, behavioral and automated app security, and more. Nanalyze did something similar with a list of six AI cybersecurity startups to watch in 2018.
Of note (although it’s debatable how much), Wired noticed that in its latest SEC filing, Alphabet (Google’s parent) cautioned investors specifically about AI technology, warning that “new products and services, including those that incorporate or utilize artificial intelligence and machine learning, can raise new or exacerbate existing ethical, technology, legal and other challenges which may negatively affect our brands and demand for our products and services and adversely affect our revenues and operating results.”
Six months earlier, as Wired noted, Microsoft issued a similar statement in its August SEC filing: “AI algorithms may be flawed. Datasets may be insufficient or contain biased information. Inappropriate or controversial data practices by Microsoft or others could impair the acceptance of AI solutions. These deficiencies could undermine the decisions, predictions, or analysis AI applications produce, subjecting us to competitive harm, legal liability, and brand or reputational harm.”
In other words, where it concerns AI, neither company yet knows the full extent of what it doesn’t know.
Risk factors in SEC filings are often a listing of banalities covering everything from economic conditions to the weather, and it may mean absolutely nothing that Alphabet and Microsoft both had something to say about the vagaries of AI development. But it may also say something about how open-ended the AI landscape is, with unending possibilities that can bring unknown problems or raise multi-cornered issues.