I got asked how I think about AI
Author: Jonaz Kumlander
I got asked by a friend how I think about AI and how I see it should be used in the workplace right now. It turned into a long discussion, but I think it is worth sharing here as well.
What about Agents?
I think Agents are a great way to use AI. They are easy to use and can be very helpful, but I think they are best suited to simple tasks, not assignments that require multiple steps to complete.
Let's say we have an LLM that is correct in 80% of its answers. If we use an Agent for a task that requires 5 steps to complete, there is a good chance the Agent will fail:
- Step 1: 80% correct
- Step 2: 80% × 80% = 64% correct
- Step 3: 64% × 80% = 51.2% correct
- Step 4: 51.2% × 80% = 40.96% correct
- Step 5: 40.96% × 80% ≈ 32.77% correct
- ...
So the more steps we have, the more likely it is that the Agent will fail to complete the task correctly.
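To make the compounding concrete, here is a minimal sketch in plain Python. It assumes each step succeeds independently with the same probability, which is a simplification, but it reproduces the numbers in the list above:

```python
def task_success_probability(per_step_accuracy: float, steps: int) -> float:
    """Probability that every step in a task succeeds, assuming each
    step is independent and succeeds with the same probability."""
    return per_step_accuracy ** steps

# An 80%-correct model over a 5-step task:
for n in range(1, 6):
    print(f"Step {n}: {task_success_probability(0.80, n):.2%}")
# Step 5 ends up at about 32.77%.

# Even at 90% per step, a 5-step task only succeeds about 59% of the time:
print(f"90% over 5 steps: {task_success_probability(0.90, 5):.2%}")
```

The independence assumption is generous to the Agent; in practice an early mistake often makes later steps worse, not just unlucky.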
So let's say the task is to reorganize my week because my plans have changed: instead of being in the office, I need to fly to the US, and I want an Agent to handle my week.
- Step 1: Check my calendar
- Step 2: Check my tasks
- Step 3: Check my email
- Step 4: ....
If I just handed this off to an Agent, it would be a disaster. Even if we got to 90% correct answers per step, the Agent would still likely fail to complete the task (90% compounded over 5 steps is only about 59%).
If we instead took an agentic approach and let the Agent first answer with the steps it plans to take and how it would execute them, I could go through the plan, correct the steps, and add more if needed. Then I would let the Agent handle the task. This gives me suggested steps, the possibility to correct what needs correcting, and the following steps would also be more correct. I would still get a lot of efficiency out of the Agent, since it executes the steps for me.
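This plan-first, human-in-the-loop approach can be sketched as a small loop. This is a hedged illustration, not a real framework: `propose_plan`, `review_plan`, and `execute_step` are hypothetical stand-ins for whatever your agent tooling actually provides.

```python
def propose_plan(goal: str) -> list[str]:
    # Hypothetical: in a real system this would ask the LLM/Agent
    # for a step-by-step plan toward the goal.
    return ["Check my calendar", "Check my tasks", "Check my email"]

def review_plan(plan: list[str]) -> list[str]:
    # The human reviews the plan BEFORE anything runs: correct steps,
    # remove bad ones, add missing ones.
    print("Proposed plan:")
    for i, step in enumerate(plan, 1):
        print(f"  {i}. {step}")
    # In a real tool this would be interactive; here we just add a step.
    plan.append("Book flights to the US")
    return plan

def execute_step(step: str) -> None:
    # Hypothetical: the Agent executes one approved step.
    print(f"Executing: {step}")

goal = "Reorganize my week; I need to fly to the US"
plan = review_plan(propose_plan(goal))  # human approves before execution
for step in plan:
    execute_step(step)
```

The point of the design is where the human sits: between planning and execution, so errors are caught once, up front, instead of compounding step by step.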
I think this is the approach we need, instead of dreaming of autonomous agents that can handle everything. It will give us a lot of efficiency already today.
So what I'm saying is: don't wait for the autonomous agents. Start using the tools we have today and you will see a lot of efficiency gains already.
For tasks that are recurring and have fixed steps, we can hand the whole thing to an agent. I call these execution agents, and they can be small or big. This is definitely a solution we should look into as well and start utilizing today.
So how should you think about AI?
Reading the above, you understand that there are many efficiency opportunities in what AI already gives you today. I would also recommend reading my 2 basic rules for AI in this blog post.
With those 2 basic rules as a base, I see AI as a colleague that makes me more efficient. The colleague is the production side of myself, while I focus on the creative side of assignments. So what does that mean for you?
In my way of thinking, it means that if you don't learn how to use AI tools, and don't focus on developing the creative side of whatever you do, you will be out of work in the future. The production side of work will be handled by AI and the creative side by us humans. Look at the industrial revolution, where workers were replaced by machines and later robots, and humans moved into operator roles. I expect the same kind of change with AI.
So focus on your creative side and how to develop it, start using AI for the production side, and develop your skills in how to use AI.
So what about the future?
What happens in the future nobody knows, but I believe the vision of AGI as 100% autonomous, replacing humans, is much further away, if it ever happens. So I'm skeptical of the visions of AI replacing humans already in 2025 or 2026.
I recently read an interesting article arguing against the emergence of Artificial General Intelligence (AGI), now or ever. The author agrees with AI commentator Gary Marcus that AGI isn't arriving in 2025, but takes it a step further, claiming AGI will never arrive. The core issue, as the author explains, is that AI systems can't genuinely create meaning, a gap that computational power can't bridge because of AI's disembodied nature.
Meaning, the article argues, arises from embodied experience and direct sensory engagement with the world, explained through Charles Sanders Peirce's semiotics, Susanne K. Langer's analysis of presentational symbols, and Alfred North Whitehead's theory of prehension. Peirce describes meaning-making as Firstness (pure, unmediated feeling), Secondness (the experience of resistance), and Thirdness (the synthesis of Firstness and Secondness into a relational whole). Langer's idea of Gestalt, the fundamental perceptual form that anchors our initial, unmediated encounter with the world, parallels Peirce's Firstness. Whitehead's modes of presentational immediacy and causal efficacy also illuminate how we encounter the world in its raw immediacy and potential. Meaning is constituted through symbolic expression, a process Langer explores by dividing meaning-making into discursive and presentational forms. Presentational forms, found in the arts, convey meaning holistically, providing insights into human experience that are inaccessible through linear thought.
According to the author, AI's real revolutionary potential is its ability to reveal the limitations of the modern ontological framework that created it. Reality exists in motion, relationality, and the triadic movement of Firstness, Secondness, and Thirdness, as well as in the expressive function of presentational symbolic forms. Meaning is inherently epistemological, ontological, and ethical, grounded in embodied, relational experience.
The author argues that the most insidious narrative in AI is the possibility of AGI emerging and enslaving or destroying us; we are, in fact, ontologically tethered to these systems right now, bound in a relationship of meaning-making servitude. AI produces meaningless outputs, and any meaning ascribed to its data arises through re-embodiment in a historical, temporal, and feeling subject. The author concludes that the foundational ontology that gave rise to AI systems is incapable of producing AGI.
So with that in mind, I'm not skeptical of AI and its development, but I think that for the moment we should focus on what we have today and get the benefits of it, and make sure we as humans develop our creative side and use AI to its maximum.
Do you have to think about anything more?
There is definitely a lot more to think about. Three major things to remember, which I will not go into depth on here, are data privacy, cyber security, and sensitive/confidential information.
Data Privacy
What data are you allowed to feed a model with, and where is the data processed? If you are in the EU, you have to consider GDPR and the AI Act.
Cyber security
You need to make sure the tool is secure and understand how the data input you provide is handled. What information/data on your laptop or in the cloud can it access? Let's say you connect a cloud drive like OneDrive or Google Drive and give the tool full access: then you really need to understand how it accesses the information you have stored.
Sensitive and/or confidential data
If you decide to use sensitive or confidential data in an AI tool, you really need to understand whether the model uses your input for training, because if it does, your data can end up being seen by other users of the tool.
If you want to know more please let me know :)