The Human Role in AI Development: Finding the Balance
By Carmen Taglienti / 6 Jun 2024 / Topics: Change and training, Data and AI, Modern workplace, Digital transformation
The transcript below is edited and condensed for clarity.
There's a lot going on as it relates to responsible and ethical AI development. Humans are using these systems to become better human beings — to learn more, to advance our own intelligence and logic — and I believe responsible AI involves using AI as a copilot. In that same vein, we shouldn't fall into the trap of thinking AI is our servant and that we can sit back and let AI do everything for us.
The other side is the ethical side. I tend to bind the terms “responsible” and “ethical” together, but ethical AI use means being thoughtful and considerate about appropriate use cases. For example, using AI to gather information about someone in a way that may violate their PII, or using that information to steal someone’s copyrighted material — these are use cases with ethical implications.
I like the concept of “trust but verify,” where we embrace AI to assist with human processes and remove the mundane — but we also leave room for human-level reasoning and decisions. An AI workflow can be built to include human assistance. Creating clarity around when humans get involved and establishing that we have a place in an AI work world goes a long way in building trust and working together with AI, not separate from or against it.
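As a rough illustration of that idea, here is a minimal sketch of an AI workflow with a human checkpoint built in. The function names and approval flow are hypothetical assumptions, not drawn from any specific tool.

```python
# Minimal sketch of a human-in-the-loop AI workflow (hypothetical names,
# not tied to any particular framework).

from dataclasses import dataclass


@dataclass
class Draft:
    task: str
    content: str


def generate_draft(task: str) -> Draft:
    # Stand-in for an AI call that handles the mundane first pass.
    return Draft(task=task, content=f"AI-generated draft for: {task}")


def request_human_review(draft: Draft) -> bool:
    # The human-level reasoning decision: a person approves or rejects.
    answer = input(f"Approve draft for '{draft.task}'? [y/n] ")
    return answer.strip().lower() == "y"


def run_workflow(task: str):
    draft = generate_draft(task)
    if request_human_review(draft):
        return draft   # Verified: proceed with the AI's output.
    return None        # Rejected: stop here or loop back for rework.


if __name__ == "__main__":
    result = run_workflow("summarize the quarterly report")
    print("Accepted" if result else "Sent back for rework")
```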
I often hear this concept of being the CEO of your own company in the realm of large language models. I’m guiding my “agents” to do whatever I need them to do to accomplish my goal; they tell me when they’ve completed the task, and then I take a look and verify that what they’ve done is correct and in line with what I want to do.
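That delegate-then-verify loop can be sketched in a few lines. Again, the names and checks below are illustrative placeholders, not a real agent framework.

```python
# Sketch of the "CEO of your own agents" pattern: delegate a task, collect the
# agent's reported result, then verify it before accepting.

from typing import Callable, List


def delegate(task: str, agent: Callable[[str], str]) -> str:
    """Hand a task to an agent and collect the result it reports back."""
    return agent(task)


def verify(result: str, checks: List[Callable[[str], bool]]) -> bool:
    """Human-defined checks stand in for the human review step."""
    return all(check(result) for check in checks)


if __name__ == "__main__":
    # A stand-in agent and a trivial acceptance check.
    mock_agent = lambda task: f"Completed: {task}"
    checks = [lambda result: result.startswith("Completed")]

    result = delegate("draft an outline for the customer briefing", mock_agent)
    print("Accepted:" if verify(result, checks) else "Rework needed:", result)
```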
AI is moving in a direction where we can answer questions at the speed of thought. When you find individuals who are really good critical thinkers, they want to keep learning — so you can measure productivity simply by listening to people talk about the things they were able to accomplish through technology.
We’re still in the early days, so I think about productivity through engagement. But in the literal sense, it’s as simple as answering the question: Were you able to get the job done in the time you thought it would take?
One of the biggest challenges for humans is what's called transfer learning, and being an educator myself, I know it takes a long time to accomplish. You go to class, you learn about cloud computing or cybersecurity, and it still takes a long time before you can actually do it. But if you have an expert who knows it, you can transfer that learning and move faster. In a utopian world, AI lends itself to a truly collaborative learning process where critical thinking is even baked into the model.
Hopefully, AI will eventually allow our brains to develop stronger reasoning and critical thinking, because we won’t have to go back to ground zero every time we learn something — we can leverage these intelligent agents to help us.