How smart are we about artificial intelligence?

Within the last 15 years, the world has seen an explosion of data on a scale that makes the impact of the Gutenberg printing press pale in comparison.

Individual pieces of data thrown onto the Internet by people going about their daily lives have contributed mightily to big data, and big data in turn to machine learning, an application of artificial intelligence that gives systems the ability to learn and improve from experience without being explicitly programmed. In recent years, AI has not only improved dramatically; it has transformed how we access and consume information, make decisions, and even influence outcomes.

For all of this progress, our ability to understand AI's far-reaching impacts and to formulate sound policies well in advance has lagged behind, generating uncertainty. This uncertainty has bred both unbridled optimism and foreboding pessimism.

This is not a new phenomenon. In the 16th century, Conrad Gessner was arguably the first to raise the alarm about information overload: in his groundbreaking book, he described how confusing and harmful to the mind and psyche the seemingly unmanageable deluge of data and information would be. As in Gessner's era, some concerns will turn out to be patently wrong, as those about the printing press did. Others will be broadly correct but only moderately relevant, such as the fear that television would hurt the radio industry. Still others will be broadly correct and deeply relevant, such as the fear that robots will take over many jobs and render some occupations obsolete.

So, what is to be done about these risks? How do we think about the ethics of the decisions we make in programming AI, and of the decisions AI will make in the course of performing its functions?

If personal data is currency, then whoever we give our data to already holds great power, and will hold even greater power, over our digital and real lives. Since these entities, such as governments and big tech, are the very same ones that develop and deploy AI, should we be wary? Will AI extend or even enhance what it means to be human? What sensible safeguards should we put in place now to reap the benefits while avoiding the potential harms?

Join us as we explore these questions and more with our guests:

  • Joanna Bryson: On the Nature of Intelligence and AI (January 13, 2021)
  • Carl Gahnberg: AI — not your average governance challenge (January 20, 2021)