How to Propel the U.S. into a Sustainable Leadership Position on the Global Artificial Intelligence (AI) Stage
By Naveen Rao and David Hoffman
At Intel, we believe that being a catalyst for change comes with the responsibility of ensuring the world is prepared for the technological transformations we help usher in. This has never been truer than it is for artificial intelligence (AI). AI has the potential to impact industries and workforces in profound ways, even as it unlocks enormous economic potential.
AI is more than a matter of making good technology; it is also a matter of making good policy. And that’s what a robust national AI strategy will do: continue to unlock the potential of AI, prepare for AI’s many ramifications, and keep the U.S. among the leading AI countries. At least 20 other countries have published, and often funded, national AI strategies. Last month, the administration signaled its commitment to U.S. leadership in AI by issuing an executive order launching the American AI Initiative, which focuses federal government resources on AI development. Now it’s time to take the next step and bring industry and government together to develop a fully realized U.S. national strategy to continue leading AI innovation.
Historically, when technology development has been coupled with thoughtful regulation and meaningful citizen-focused protections, the U.S. has been an unmatched innovation powerhouse. We at Intel are excited that the technology community continues to deliver on the promises of AI. But to sustain leadership and effectively manage the broad social implications of AI, the U.S. needs coordination across government, academia, industry and civil society. This challenge is too big for silos, and it requires that technologists and policymakers work together and understand each other’s worlds.
We believe a national strategy that embraces these ideas will promote U.S. leadership in AI, help citizens embrace the value and benefits of AI technology, and enable those benefits to be realized sooner. To that end, we have worked with our teams of technologists and policymakers to propose such a plan, building on a call to action we first made last May.
Four Key Pillars
Our recommendation for a national AI strategy lays out four key responsibilities for government. Within each of these areas we propose actionable steps. We provide some highlights here, and we encourage you to read the full white paper or scan the shorter fact sheet.
Sustain and fund government AI research and development to advance the capabilities of AI in areas such as healthcare, cybersecurity, national security and education, with clear ethical guidelines in place.
- Make specific funding commitments to AI research and development, starting with a study to determine the areas with the greatest potential for AI deployment, including socially beneficial applications such as climate change mitigation and education.
- Develop responsible government policies for AI that both support the government’s efforts to engage the private sector and inform the development of international norms.
- Support international cooperation and data interoperability standards that facilitate sensible cross-border data sharing as well as intellectual property protections.
Create new employment opportunities and protect people’s welfare, given that AI has the potential to automate certain work activities.
- Invest in the development of a diverse workforce that both creates and uses AI. Continuous education is the strongest and most widely agreed-upon approach to developing a workforce capable of creating AI systems; curricula must be updated and investment in higher education bolstered.
- Support programs dedicated to skills retraining and continuous lifelong learning to ensure that everyone has a place in the transforming workplace.
- Undertake a “National Service” study to examine how a broad-based public/private partnership network of national service opportunities might alleviate potential job loss, while rebuilding infrastructure and developing skills in the use of technology.
Liberate and share data responsibly: the more data that is available, the more “intelligent” an AI system can become. But guardrails are needed.
- Encourage transparency in the use of data and the development of national data protection regulations to protect privacy while allowing for the innovative and ethical use of data.
- Pass comprehensive U.S. privacy legislation that improves the Federal Trade Commission’s (FTC) ability to mitigate individual and societal harm.
- Develop international data interoperability standards to speed the evolution and adoption of AI applications.
Remove barriers and create a legal and policy environment that supports AI, so that the responsible development and use of the technology is not inadvertently derailed.
- Where necessary, regulate thoughtfully on principles rather than regulating specific algorithms.
- Extend general legal principles to AI and assess whether existing laws and regulations that may prevent autonomous performance of certain tasks are still justified.
- Protect innovation and IP globally, avoid requiring companies to transfer technology or IP as a condition of doing business, and use trade agreements and diplomacy to advance these goals.
Working Together for Leadership
When the regulatory environment is known and understood, businesses and government can maximize their impact by pursuing the same goals. At Intel we look forward to partnering closely with policymakers to realize AI’s benefits to society as we advance our technology portfolio.
• Naveen Rao is corporate vice president and general manager of the Artificial Intelligence Products Group at Intel Corporation.
• David Hoffman is associate general counsel and global privacy officer at Intel Corporation.