Last week, I flew to Austin, Texas, to attend South by Southwest, which brings together innovators in media, tech and entertainment. I was excited to see my Ad Council colleagues speak about the power of partnerships for our Seize the Awkward campaign at the SHE Media Future of Health Co-Lab, and to host a panel on championing skills-based hiring to revolutionize and diversify the workforce. And with over 25 tracks of programming, SXSW 2023 was a fascinating place to get insights on everything from GPT-4 to ethical AI and accessibility in tech. Here are the top five topics that stood out to me at this year’s festival.
Ad Council’s Margaret Files speaking at SHE Media Future of Health
Your digital identity is a human right
A common debate across SXSW sessions was whether it’s more important to embrace the emerging technology in our lives or to protect ourselves from it. I listened in on a keynote session with Chelsea Manning, a technologist, activist and whistleblower famous for disclosing classified documents to WikiLeaks. Her panel, The Future of Privacy on the Web, covered the broad topic of data collection versus privacy and her efforts to make the internet more private through cryptography. Manning argued that data privacy is a human right that protects activists, journalists and everyday citizens on the internet. She made the point that data privacy is necessary to prevent the digital tracking of abortions in the state of Texas, where abortion is now illegal.
In another keynote session I attended, The Metaverse Mindset for Web3, AI and the Future of Business, I learned that people will now have more ownership and agency over their digital identities in the new and improved Web3. In this talk, Sandy Carter, Senior Vice President and Channel Chief at Unstoppable Domains, explained that if your identity follows you through the metaverse, shares information about you and represents your own personal brand, you should have a say in how your data is used and which companies benefit from it. In Web3, the goal is for data about individuals to be stored with the individuals themselves.
Ethical AI and humane approaches to innovation
With the rise of the metaverse, Web3 and ChatGPT, another ongoing conversation at SXSW was about how we make these technologies work for society, rather than against it. While privacy is one aspect, one of the more interesting panels I attended leaned into how we ensure innovations remain ethical.
At How Purpose Can Guide Responsible Tech, panelists argued that instead of demanding “human-centered tech,” we should aim for a “humane-holistic approach” to tech. To them, the idea of human-centered tech is incomplete because technology impacts not just us but our relationships with each other, with ourselves and with the technology itself. If we’re concerned that the rise of robots and AI will dehumanize us, we should make sure that technologists with backgrounds in UX, anthropology and sociology are part of the innovation process.
In this panel, they mentioned that 64% of people lack confidence that AI will make a positive impact on society’s future. One of the panelists, Tricia Wang—co-founder of Sudden Compass, a consulting firm for tech startups, and an “ethnographer” whose job is to bring humanity to emerging tech—explained that language models like ChatGPT are built for “prediction, not purpose,” since they are designed to generalize human behavior. To counter this, she argued that representation and equity need to be central to creating empathetic design. When it comes to AI, social scientists and others who understand human behavior need to be at the table.
“Calling an innovation ‘purpose-driven’ isn’t enough. Most entrepreneurs believe they’re creating something that will fix the world, so values and representation need to be stated and transparent up front.” - Tricia Wang
At our panel, Driving DEI Through Skills-Based Hiring, the topic of AI broadened to its impact on our workplaces. Aneesh Raman, vice president and head of the Opportunity Project at LinkedIn, discussed how companies are struggling to keep up with the rapid pace of change. But amid rapidly emerging tech that will impact jobs, Raman encouraged companies and employers to remain human-centered when thinking about talent.
“The digital age has made it impossible for companies to keep up with technologies…the only way to solve that is to put skills at the center.” - Aneesh Raman
Training employees on these emerging technologies is the best way to make sure they support the goals of companies, while also giving those employees the tools they need for success. For example, Raman suggested media and communications professionals consider a course in ChatGPT to understand the impact it will have on SEO, blog writing and other aspects of their jobs.
Panelists (left to right) Jill Kramer, Jonathan Adashek, Aneesh Raman, and Bridgette Gray speaking at the Driving DEI Through Skills-Based Hiring panel
Accessibility and equity in tech
Another critical component of the AI discussion was about how and why DEI needs to be considered at every stage of development. This means having representation in the design, testing and implementation so algorithms aren’t amplifying implicit biases or harmful stereotypes.
Since language models have the power to take over tasks from writers and lawyers (GPT-4 has successfully passed the bar exam!), they need to thoughtfully and accurately represent all communities.
Though GPT-4 is now said to be fluent in more than 20 languages, these language model technologies are often built and tested with English speakers in mind. Tools that will impact all our lives need to work for all languages, dialects, cultures and ability levels.
One fascinating talk I listened to was from Sarah Kane, one of the few blind astrophysicists, who has dedicated herself to the sonification of outer space: she creates music from the data collected by NASA telescopes. When asked why it’s important to build auditory accessibility into astrophysics, Kane used the example of the dark mode feature on our iPhones. “When enough people with disabilities demand accessible tools,” she said, “those tools become mainstream options.”
The need for education and tech literacy
As new technological tools rapidly evolve, so too must laws and regulations. But since lawmakers themselves often lack education on emerging tech, education and tech literacy become increasingly important.
In the panel on ethical AI, the panelists urged tech companies to pair new launches with public literacy efforts. Beyond educating users and legislators, we need to make sure we’re giving kids in schools the tools they need to keep up with the pace of change.
Is the pace of change happening too fast for brands to keep up?
The stunning leaps forward in these and other technologies have taken place at a time when cultural, political and environmental changes all seem to be dramatically accelerating. It feels nearly impossible for brands to keep up. One panel I listened in on, Brand POV in a Crisis, explored if and when brands should speak out on society’s most pressing issues. My takeaway was that brands need to make sure an issue aligns with their business model and their purpose; otherwise, they may find themselves unequipped to contribute productively to the conversation. And if they aren’t taking meaningful action to back up their messaging, their efforts can come off as performative or worse.
Sometimes it’s better not to speak out publicly at all and instead to communicate internally, making sure your employees know where the organization stands. But first and foremost, the panelists all agreed that developing a “coordinated response process and a crisis communications readiness plan” is the best first step.