It’s hard to imagine we could get through these pandemic times without the Internet to support our remote working, deliver our entertainment, connect us to loved ones and casual acquaintances, and give us information about … everything. Yet, this is no calm, silent library filled with tomes of knowledge — the fight over which “truth” is “right” and whether or not science is optional is loud, raucous, and driving much of our real-world political and physical reality.
In this arc, we talk with Michael Goldhaber, who observed that “attention” was the basis of the new economy as far back as 1997. His predictions of its potential consequences — division, disparities of power, deepening inequality, disproportionate advantages for the most attention-hungry and shameless, and the erosion of privacy — have been strikingly accurate.
But, how can bizarre conspiracy theories like QAnon gain traction and explode in popularity online, to the point where many people forsake the family and friends they know to join a band of strangers united in fervent yet false beliefs? We talk with David Troy, a network analyst and disinformation specialist who has been studying how communities form online and collaborating with researchers and analysts from around the world not only to understand disinformation, misinformation, radicalization, and extremism, but to actually expose and fight it.
Of course, knowing why and how these things happen is only half the battle. What can we do about disinformation and its spread? We talked with Deb Lavoy to learn what she and the Reality Team are doing and discuss the important questions: Can you provide an alternative view — this time, a fact-based one — in such a way as to not only educate people but also dissuade them from going down the rabbit hole of falsehoods and conspiracy theories? And if so, how effective can you be?
- Michael Goldhaber: Paying for It in the Attention Economy (March 17, 2021)
- David Troy: Gaming the Players – QAnon (March 24, 2021)
- Deb Lavoy: Cleaning Up after Dirty Disinformation Tricks (March 31, 2021)
While it’s long been said that you can’t believe everything you hear or read, that’s now more true than ever. What can you believe? Certainly not your own eyes — “deepfake” videos manipulate content so that recognizable people appear to be saying things they have never, in fact, uttered. That sort of thing may be done for innocuous fun, or with the malicious intent of discrediting the actual person. And deepfake videos are just one way in which reality is being undermined in our online activities, as whole campaigns of “disinformation” are being carried out to achieve the ends of the campaigns’ instigators. Those ends are not necessarily simple or small scale: recent elections around the globe have seen citizens targeted with disinformation with the expectation of impacting the election outcome. That sort of interference in nations’ actions is often considered an act of war.
Our guests for this arc help to explore the machinations of deepfakes and disinformation campaigns that have been observed. We’ll talk about how deepfakes are built and trained to trick us, how disinformation campaigns are constructed and detected, and what we might do to give ourselves a fighting chance to level the playing field in this easily altered reality.
- Sara-Jayne (SJ) Terp: The disinformation playbook (January 27, 2021)
- Nicolas Papernot: Can advances in technology help liberate us from the grip of disinformation? (February 3, 2021)
- Andrew Grotto: (How) Can we dig ourselves out of a deepfake hole? (February 10, 2021)
Within the last 15 years, the world has seen an explosion of data on a scale that makes the impact of the Gutenberg printing press pale in comparison.
Individual pieces of data thrown into the Internet by people as they moved about their daily lives have contributed mightily to big data, and that in turn to Machine Learning, an application of artificial intelligence that gives systems the ability to automatically learn and improve from experience without being explicitly programmed. In recent years AI has not only made dramatic improvements, it has transformed how we access and consume information, make decisions, and even affect outcomes.
For all of its progress, our ability to understand its far-reaching impacts and to formulate reasonable policies well in advance has lagged behind, generating uncertainty. This uncertainty has bred both unbridled optimism and foreboding pessimism.
This is not a new sensation — in the 16th century, Conrad Gessner was arguably the first to raise the alarm about the impacts of information overload. In his groundbreaking book he described how confusing and harmful to the mind and the psyche the seemingly unmanageable deluge of data and information would be. As in Gessner’s era, some concerns will turn out to be patently wrong, such as those about the printing press. Some will be broadly correct and moderately relevant, for example the fear that TV would hurt the radio industry. Some will be broadly correct and deeply relevant, for example the fear that robots will take over many jobs and render some occupations obsolete.
So, what is to be done about these risks? How do we view ethics in light of decisions that would be made by us in programming AI or that AI will make in the course of performing its functions?
If personal data is currency, then whoever we give our data to already has great power, and will have even greater power, over our digital and real lives. Since these entities — such as governments or big tech — are the very same ones who develop and employ AI, should we be wary? Will AI extend or even enhance what it means to be human? What sensible safeguards should we implement now to reap the benefits but forgo the potential harms?
Join us as we explore these questions and more with our guests:
- Joanna Bryson: On the Nature of Intelligence and AI (January 13, 2021)
- Carl Gahnberg: AI — not your average governance challenge (January 20, 2021)
Do you ever wonder what really happens with the digital footprints you leave?
By now, most people are familiar with the fact that “their information” is being collected and shared with third parties they’ve never even heard of. As with many things, this was happening well before the digital age, but the Internet has made the practice bigger, faster, better, and more international.
Whatever you may think of your own government collecting data about your activities, have you considered the possibility that other governments may be creating a picture of you? Aynne Kokas discusses the implications and realities in the controversy around the TikTok ban in the US and concerns about American citizens’ information being routinely collected by the country of origin of that service.
And even if that data is not being exfiltrated to foreign lands, should you be concerned about how your habits and activities are being used to shape your opinions and lead you down the garden path, so to speak? Bruce Schneier explores with us the implications of amassing data about users on the Internet.
Finally, as noted above, tracking people’s activities is not unique to the Internet. Retailers have long been planning store layouts to guide customers to particular items (impulse buying fresh baked goods on entering the store! being unable to escape the IKEA maze without passing through every department!), and now the observations and inferences of behaviour have gone high tech and tightly integrated across the Internet. Keith Kirkpatrick discusses the highs and the lows of retailers tracking customer footprints — digital and otherwise.
Join us for this timely topic as we traipse digitally through end of year and online holiday activities!
- Aynne Kokas — When your online activities leave footprints on foreign soil (December 9, 2020)
- Bruce Schneier — How your digital footprint makes you the product (December 16, 2020)
- Keith Kirkpatrick — Real world retail comes with a side of digital surveillance (December 23, 2020)
We are back with another arc of episodes! This time, the focus is on the economics of collaborative innovation.
Some of us like to think of the Internet as a global public good, developed with collaborative contributions of the brightest technical and social minds on the planet. At the same time, the reality is that networks are built by private interests, and national interests are increasingly being brought to bear to try to shape the reality of the Internet experience within and across borders. The common driving force is often economics.
So, when it comes to innovation, how does collaboration work, economically, between competing private interests? Especially when those interests are entire nations, grappling with globalization?
Join us for 3 fascinating discussions to explore these topics!
- Maria Farrell — Collaboration and the globalized world (October 21, 2020)
- Patrik Fältström — Open Standards and Collaboration Among Competitors (October 28, 2020)
- Konstantinos Komaitis — Economics of Technology and Collaboration (November 11, 2020)
(Note the gap week… we’re not publishing anything in the week of the US election!)
Consolidation sounds like a good thing — from https://www.lexico.com/en/definition/consolidation:
- the action or process of making something stronger or more solid.
- the action or process of combining a number of things into a single more effective or coherent whole.
But, in all things, such progress may have negative consequences. Bruce Schneier observed:
For decades, we have prized efficiency in our economy. We strive for it. We reward it. In normal times, that’s a good thing. Running just at the margins is efficient. A single just-in-time global supply chain is efficient. Consolidation is efficient. And that’s all profitable. Inefficiency, on the other hand, is waste. Extra inventory is inefficient. Overcapacity is inefficient. Using many small suppliers is inefficient. Inefficiency is unprofitable.
Efficient systems have limited ability to deal with system-wide economic shocks. Those shocks are coming with increased frequency. They’re caused by global pandemics, yes, but also by climate change, by financial crises, by political crises. If we want to be secure against these crises and more, we need to add inefficiency back into our systems.
(https://www.schneier.com/blog/archives/2020/07/the_security_va.html)
When it comes to Internet technologies and their implementation and deployment across geographies and corporations, this sort of “tidying up” and “simplifying” of the ecosystem can have some unexpected and unintended consequences.
We talked to experts in Internet technologies to explore the trends of Internet consolidation, and some of the implications for its users and would-be innovators.
Join us for these upcoming podcasts:
- July 15, 2020: “Internet Consolidation and the need for diversity”, with Paul Vixie
- July 22, 2020: “Internet Consolidation and the Rise of Content Distribution Networks”, with Russ White
- July 29, 2020: “Internet Consolidation – Who is Resolving Your Query?”, with Dr. Roxana Radu and Michael Hausding
Our next arc of podcasts focuses on Artificial Intelligence, and what to expect as it becomes more mainstream.
Although AI has been a research topic for decades, it is only now slowly starting to seep into mainstream technology experience. Even so, industry leaders and tech experts have formed views of what they expect — for good or ill — from AI as it infiltrates society. Lee Rainie, of Pew Research, collected hundreds of viewpoints and shared the diversity of perspectives with us.
But when is it “artificial intelligence”? We talked with Dr. Robert Epstein about how AI programs can fool us into thinking they are sentient — and what that would mean.
Clearly, AI-driven technology is having, and will increasingly have, a significant impact on innovation as well as our daily lives. Our final guest in the arc, Dr. Colin Garvey, explains his view on how we should approach governance of AI development, in order to ensure we prosper in the world we are building.
Here’s our schedule for publishing these diverse and lively podcast episodes!
- June 24, 2020: “Artificial Intelligence and the Future of Humans”, with Lee Rainie of Pew Research
- July 1, 2020: “Artificial Intelligence and Sentient bots”, with Dr. Robert Epstein
- July 8, 2020: “A Zen Approach to Making AI Work for All of Us”, with Dr. Colin Shunryu Garvey
We are excited to start a new TechSequences podcast arc this week: COVID19 and digital contact tracing apps.
As with any epidemic, tracing the contacts of those who have been infected is an important tactic to help contain the reach of the virus. Tracing efforts have to be consistent and constant. Doing the work manually requires skilled personnel, and with the existing reach of the pandemic, it’s challenging to cope with the scope without some digital help. Thus, some form of digital contact tracing is probably inevitable as regions around the globe seek to return to some level of normal business and life activity.
For this pair of podcasts, we look at the potential policy and privacy implications of how contact tracing apps are used, and we survey the apps already in use or in development around the globe.
The podcasts and planned release dates are:
- June 10, 2020: COVID19 Contact tracing apps, a fair bargain for public health? with Konstantinos Komaitis
- June 17, 2020: The 411 on Contact Tracing Apps, with Patrick Howell O’Neill
Follow us to get the notification as soon as they are published!
Today we launch the first in our series, or arc, of episodes on Security and Privacy! Check out the podcast, at the link below.
In this series of podcasts we examine the relationship between security, privacy, and digital identity. With the help of our distinguished guests we look at how the definitions and expectations of security, digital identity, and privacy have evolved. We look back at the assumptions underpinning the Internet architecture and the consequences of those early decisions on issues facing us today in each of these realms. We also look at emerging services and technologies (Artificial Intelligence, Internet of Things, Big Data, etc.) to anticipate how they may change our expectations, and to anticipate the unintended consequences.
Today’s episode gives you a preview of the whole arc. Be sure to tune in weekly, as the rest of the arc’s podcasts are released:
- April 29, 2020: Marc Rotenberg
- May 6, 2020: Suzanne Woolf
- May 13, 2020: Eve Maler
For today’s episode, click here: https://www.techsequences.org/podcasts/2020/04/overview-security-privacy
See all the podcasts, here: https://www.techsequences.org/podcasts/
I have a confession to make: we’ve been sitting on some recordings of fun discussions for weeks now, as we’ve been putting the necessary pieces in place to get them out to the world (i.e., this website and podcast hosting). There have been a few distractions along the way — that beer-virus-thing that’s keeping us all at home and yet somehow not making more hours in the day.
But, at last, we are publishing episodes! Click the “Podcasts” link above, or go to https://www.techsequences.org/podcasts/ . Be sure to subscribe to the RSS feed (and if you already have — please do so again, because the feed has changed).
That brings up the final point: there are some pieces of this that are still a bit creaky, and we’re working on fixes even as we go live. If you see something — say something, especially if you know how to fix it :^) .