by Char Hunt | Senior TechChurn Staff Writer, Editor
February 21, 2025
More AI… what is it good for?
What’s left for humans in this ever accelerating digital world? Should we ask what things will be taken away, or what we lose? What jobs will disappear through obsolescence? None of these questions are new… not really.
Shouldn’t the question be what’s possible?
I would argue that we should really focus on the skills people already have.
We really need to talk about things on a human level... the things unique to people everywhere.
So, how do we frame the discussion?
We need to talk about the human part of work, life, organizations, cultures, leadership, and beyond.
Talking Points
Big Tech talks about how AI will help make our lives easier by removing drudgery. How it can free up more time for the things that matter.
Healthcare professionals have written about how AI will help staff (doctors, nurses, clinicians, etc.) optimize, streamline, and be more efficient. To free up more time for personal care of patients.
To be better.
We are collectively experiencing a fundamental shift. Quickly. And while the conversation is interesting, it needs more exploration.
What skills are unique to people as a whole?
Communication? Empathy?
Listening, or critical thinking?
In the recent past many people have been inspired by the speed of innovation, science & technology, and advancements. By what it means for work as we know it and what it means for institutions, organizations and the people who run them.
Over the last 2-3 years the reaction has been incredible — especially in the last 3 months where user adoption skyrocketed. We need to talk about the human part of everything.
On a Human Level
Schools at all levels will need to adapt to the tectonic shift because, as we trudge through these early days, the realization of a vastly different landscape means the next generations need good, solid foundations. That’s if things don’t sputter out in a haze of big tech competition, political wrangling, and rhetorical imperialism. Take for example the veep’s AI summit address in Paris last week.
He believes that “…AI will have countless revolutionary applications in economic innovation, job creation, national security, health care, free expression and beyond, and to restrict its development now will not only unfairly benefit incumbents in this space, it would mean paralyzing one of the most promising technologies we have seen in generations." It’s encouraging to see him, along with various sectors respond to these trends.
I just hope folks take a moment to consider the speed of such innovation against the risks, of which there are many. It’s early on and structural problems exist.
We here at TechChurn endeavor to raise awareness around cyber vulnerabilities, and it would be wonderful if remediation strategies were at the forefront of such AI / LLM discussions. Sadly, this is not typically the case, and in some ways, cybersecurity takes a back seat. Thankfully, it gained momentum over the last few years. But not for long if conditions remain as they are.
The Problem
I’m talking about things like data leakage, inadequate sand-boxing, prompt injects, and also unauthorized code issues, among other things. These factors are technological. We haven’t even touched on the political, ethical, and environmental costs. All I’m saying is that the rush to “AI first” is awesome — I’m all for it. However, let’s just keep an out eye out for the wheels on this wagon.
Kate Crawford, a leading AI expert and scholar, has some amazing insights on the impact of fast moving tech, i.e large-scale data systems, machine learning, and artificial intelligence innovation. She’s also the author of Atlas of AI1 which was published in 2021. In her view, shifting away from just focusing on things like ethical principles is beneficial to talking about power. And her analysis about de-centralized tech was prescient. She expands on the subject this way, “In other words, how do we make this a far deeper democratic conversation around how these systems are already influencing the lives of billions of people in primarily unaccountable ways that live outside of regulation and democratic oversight?”
The perspective is extremely insightful, in a time when we are witnessing wildly unrestrained automation in real time.
During last weeks AI Action Summit, the overarching themes centered on issues of sustainability, jobs, and public infrastructure. The global summit hosted world leaders and tech sector luminaries in Paris, announcing billions of dollars in investments on artificial intelligence infrastructure.
Here’s a micro list of the latest news on regulatory measures adjacent to AI and cybersecurity:
Tech companies say regulations can stifle innovation
Tech deals made to focus on AI infrastructure development
Trump reverts recent AI policy and CHIPS programs
AI Safety Institute and NIST expects cuts
Mass firings during ongoing local government purge
CISA staffers placed on administrative leave (US Department of Homeland Security's cybersecurity agency)
Calls for more policy change and leniency in Europe
France hosts February 10-11 global AI summit
Paris Action Summit
A few days ago, Europe and world leaders partnered with the private sector in an agreement to build AI safety and advance development initiatives. And top executives seem to agree that it’s right to set rules for the new technology.
The European Union's digital chief promised that the bloc will simplify its rules and implement them in a business-friendly way. This comes amid pressure on the EU to exercise a softer touch when it comes to AI regulation. It’s all a move to help keep European companies in the tech race. The other obvious stance was optimism and a favorable future in the face of uncertainty. Perhaps without some degree of irony certain aspects of power bubbled up to the surface. "I'm not here this morning to talk about AI safety, which was the title of the conference a couple of years ago. I'm here to talk about AI opportunity." — another talking point in the Veep’s first major international speech.
But this could become a problem in the aftermath of massive disruptions to infrastructure, coupled with high level forms of data destruction. There’s also a noticeable shift toward geographical concerns, economic growth and, well… power.
In case you missed it, our local government is not really interested in AI Safety. Despite stated plans to place America at the forefront of tech advancements, recent actions seems like a strange counter argument. Ironically, these rapid changes could serve to help opposition governments move closer to over-taking the global contest to control development and deployment of cutting edge technologies. With a roll-back on so many vital points of R&D, semiconductor production efforts, and regulatory concerns, it appears our local administration seeks to shed rules to boost US dominance in the AI realm, rather than drive things ahead. It’s baffling, to say the least.
Luckily, artificial intelligence experts and leaders bring some much needed perspective. Dr. Fei-Fei Li, professor of computer science at Stanford University, is leading an approach that focuses on what she calls human-centered AI.
Often called ‘the godmother of AI ‘, her framework looks at artificial intelligence in three distinct aspects
“One is that it recognizes AI as part of a multidisciplinary field; it’s not just a niche computer science field. We use AI to do scientific discovery, we want to understand AI’s economic impact, we want to use AI to super-power education and learning. It’s deeply interdisciplinary.”
She also believes we need to make sure we study and forecast what’s coming because there a lot of unknowns yet to be explored in the field. Li also points out that the most important use of a tool as powerful as AI is to augment humanity — not to replace it.
“When we think about this technology, we need to put human dignity, human well-being—human jobs—in the center of consideration.” As founding director of the Stanford University Institute for Human-Centered Artificial Intelligence, her new book, The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI2 offers deep insights into the technology as well as effective governance models. The concept of human-centered AI is the second tenet.
Don’t get me wrong. Growth and progress is very important within reason.
We need policy and decisions maker to standup norms and values to be built upon — in a literal and figurative way. Legalities and regulations are not so much fluff and nonsense. These things are put in place to protect human dignity, human lives, and freedoms.
As a technologist, I believe in guardrails, not gates. As a writer and creative, I think human well-being and imagination are paramount to the survival of people all over the world.
Without it, we would not have made progress up to this point. Without it, we cannot move forward to a bright future. The light of the human mind should not be extinguished by the might of sheer technological power, devoid of thoughtful design and deep consideration for what comes next. I agree with Dr. Li when she says,
“we should recognize human intelligence is very, very complex. It’s emotional, it’s compassionate, it’s intentional, it has its own blind spots, it’s social. When we develop tomorrow’s AI, we should be inspired by this level of nuance instead of only recognizing the narrowness of intelligence.”
It’s the third aspect of her AI framework.
When we think about this technology, we need to put human dignity, human well-being—human jobs—in the center of consideration.
— Fei-Fei Li, Stanford University
On the other end of the scenario, it’s hard to ignore the weekly deluge of database purging, staff cuts, departmental elimination, website take-downs, and egregious displays of what can only be described as the T-wit-ter-fic-ation of local government. Right now, many organizations are working to preserve vital climate, health, technological, and critical scientific data before it disappears.
Also, large swaths of the newly jettisoned workforce is supplanted by new systems designed to cut waste. New systems which are likely powered by bleeding edge tech. It’s unclear as to why things are happening that seem to counter stated claims about American leadership in the AI arena. Quantum chips… maybe? I have no idea. None of it makes much sense.
As a political moderate, I am not a member of the right's base, nor do I lean toward the left’s line of thought. I try to present my honest feelings about things as they happen and, if I’m being honest I’ve always had definite mixed feelings about both ends of the spectrum. That said, I chronicle things that tug at my own sensibilities — usually tech-driven aspects of modern life — and the forces that seem increasing aligned against us.
My Take
We have collectively under invested in the humanities over the last several decades. We discourage independent thought and simultaneously hold a less than favorable attitude about intellectualism. The next generations will need help. Especially when 20% of American adults suffer a lack of literacy and reading comprehension abilities. The US Adult literacy rate3 is the percentage of people — ages 15 and older — who can both read and write well enough to understand a short simple statement about their everyday lives.
Tools which could effectively wipe away these most basic skills during childhood formative years is problematic, without thoughtful planning and educational programs. Properly researching and designing learning experiences for K-12 age children can mean the difference between thoughtfully designed, useful data science education and AI literacy versus a virtual cheat code. (Video-gamers understand the concept.. iykyk.)
In my opinion, creative learning should go hand-in-hand, since both brain hemispheres need equal stimulation. It’s part of what used to be known as a classical education (a curriculum which include literature, math, history, civics, latin/other languages, the sciences, music and art.)
It brings us to another question. What do chat bots actually mean for student learning?4 And what does it mean for the future of generations tasked with the premise of upholding responsible AI and other ethical dilemmas? Oligarchs and digi-garchs notwithstanding, we have to find ways to work together — IRL and in the digital space. You may be asking, is that even possible now?
Let’s change that.
The current administration is driven to “maintain a pro worker growth path for AI, so it can be a potent tool for job creation in the United States." I guess that’s good, right? We find ourselves speeding towards divisions fueling disparate things, even as greater monolithic pillars shoehorn themselves into a crude existence. What rough beast is slouching towards Bethlehem? We’re truly at the beginning of a new era.
We have to find ways to work together — IRL and in the digital space.
Is that even possible now? Let’s change that.
Education and Awareness
Public education, particularly for the younger ones in our society, can ensure better more efficient progress in this arena. We need to help build and establish norms around not only the depth of potential, but also the limitations of such a powerful technology as AI.
We can’t see around corners, but we can construct legal guardrails to protect what matters most. Such things are being stress tested right now.
The story we tell about AI matters more than we realize. Right now it’s being used for unknown gains. So, the stories we tell need to include all perspectives, especially those recognizing the place of shared humanity in the equation.
I ask you to think about it. When your technical skills are eclipsed what will you bring to the table? Your humanity will matter more than ever before. Your ability to see humanity in others will also matter more than ever.
The story we tell ourselves is paramount to the shared reality we will ultimately create.
Educators and learners will need to help drive the effort. Employers are also newly minted educators with the ability to bring their workforce along. We have the responsibility to manage AI properly, pulling together to promote quality data alongside people centered growth and development. We all have the responsibility to manage it with a thoughtfully broad focused approach to cutting edge transformation. As the UN Secretary General puts it, "AI is not standing still. Neither can we."
The need for a collective global effort is extremely important for the good of all.
Want to know more? Stay linked for upcoming issues of TechChurn.
If you like this and other posts consider subscribing. If you know someone you think might be interested, tell them about it.
We like hearing from you!
Talk soon,
-Char
Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (Yale University Press), https://yalebooks.yale.edu/book/9780300209570/atlas-ai
The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI, Flatiron Books/Macmillan Publishers, (November 2023) https://us.macmillan.com/books/9781250897930/theworldsisee
US Adult Literacy Rate - MacroTrends, https://www.macrotrends.net/global-metrics/countries/EAR/early-demographic-dividend/literacy-rate
What do AI chatbots really mean for students and cheating?, https://ed.stanford.edu/news/what-do-ai-chatbots-really-mean-students-and-cheating