Thinking Matters: A Human-First AI Approach
Recently, my wife and I were sitting on the couch watching a show. I can’t remember which one, or even which streaming service, but during the commercial break, an ad caught my attention. It showed a younger guy sitting through a sales presentation, clearly bored. Across from him was an older gentleman doing his best to communicate the information through slides filled with charts and sales figures.
Instead of engaging with the information, the younger guy snapped a photo of the slides, fed them into an AI tool, barely glanced at the results, and immediately sent the findings to his boss or co-worker, who knows. The ad framed this as efficiency. Problem solved. Move on.
But that moment stuck with me, and not in a good way. The underlying message was clear: understanding the information no longer matters. AI will handle it for you. That idea felt reckless and, honestly, a little disturbing. It made me think about how relying on AI without a fundamental understanding can easily lead us to do the wrong thing, automating mistakes rather than catching them.
When Convenience Replaces Understanding
Around the same time, I was having conversations with a few high school students about AI. Two separate students each shared stories about final exams in which teachers allowed open notes… along with the use of AI to help them answer the questions. Hearing that stopped me in my tracks. Not because students were using tools, but because the higher aims of learning – comprehension, application, and analysis – had been so casually removed from the equation.
When you connect these two examples, the ad and the classroom, you start to see the potential consequences of irresponsible or inappropriate uses of AI. The danger isn’t the technology itself. Honestly, who wouldn’t like quicker access to the information they’re looking for? The risk is that AI becomes a substitute for thinking rather than a tool that sharpens it. In short, the most significant risk of extensive AI use is that it will be treated as a replacement for higher-level thinking and understanding.
While AI can generate answers quickly, the potential drawbacks challenge our fundamental beliefs about what it means to “know” something. Researchers used brainwave scans (EEG) to monitor students' brain activity as they wrote under different conditions: unaided, using Google, or using generative AI such as ChatGPT. The data revealed a shocking 47% drop in activity within the fronto-parietal and semantic networks, the brain's command centers for executive function and deep thinking, among those who relied on AI from the start (National Library of Medicine).
This isn't just about laziness; it’s about "cognitive offloading." When we outsource the effortful process of synthesis and articulation, we bypass the deep encoding essential for memory (AI Offloading).
Why Boundaries Matter
This isn’t an argument against AI. I use AI regularly. I believe in its potential. But I also believe in establishing boundaries: clear lines between human judgment and machine assistance. AI can deliver vast amounts of relevant information and detect insightful patterns in an instant, but as fantastic as these tools are, they cannot replace human intent, discernment, and accountability.
When we blur those roles, we don’t gain efficiency. We lose depth. And over time, that loss compounds. The best way I know how to explain this balance is to be transparent about my own process.

AI Fundamentals
At its core, AI is a collection of technologies that enable machines to process data, recognize patterns, and generate insights. In other words, AI is not a physical entity but a software system designed to analyze large amounts of information, helping us make informed decisions across business, healthcare, and education.
The fundamentals of AI center on data analysis, algorithm design, and model training. Through machine learning, AI systems learn from data and improve over time, enabling them to identify patterns, predict outcomes, and personalize recommendations. AI is particularly effective at working with complex data sets and surfacing connections that might otherwise go unnoticed.
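To make that idea a little more concrete, here is a minimal sketch of the "learn from data, then predict" loop described above. It isn't tied to any system mentioned in this article; the tiny dataset, the features, and the library choice (Python with scikit-learn) are purely illustrative assumptions.

```python
# A minimal, illustrative sketch of machine learning: the data and features are made up.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical examples: articles described by (word count, number of images),
# labeled 1 if readers finished them and 0 if they did not.
X = [[800, 1], [1200, 3], [300, 0], [2500, 5], [450, 1], [1800, 2], [700, 0], [2200, 4]]
y = [1, 1, 0, 1, 0, 1, 0, 1]

# Hold some examples back so the model is judged on data it never saw during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

# "Model training": the algorithm adjusts its internal weights to fit patterns in the data.
model = LogisticRegression()
model.fit(X_train, y_train)

# "Prediction": the trained model applies those patterns to new, unseen inputs.
print(model.predict(X_test))        # predicted labels for the held-out examples
print(model.score(X_test, y_test))  # fraction it got right; a human still decides if that is good enough
```

Even in a toy example like this, the pattern holds: the system finds statistical regularities in the data, but deciding whether the features, the labels, and the resulting accuracy are acceptable remains a human call.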
However, as powerful as these technologies are, they are not infallible. Consequently, human thought and oversight are crucial to ensuring that AI systems operate fairly and responsibly and remain aligned with societal values.
Critical thinking skills are more important than ever, whether you’re evaluating AI-generated information, questioning the assumptions behind an algorithm, or making decisions based on AI insights. By thinking critically and understanding the process, we can harness AI's potential while safeguarding against unintended consequences.
Ultimately, the development and use of AI technologies must be guided by a commitment to ethics and the well-being of society. As we continue to apply AI in our daily lives, maintaining a human-first approach is vital, one that keeps human judgment at the center.
Applied Sciences and AI
The integration of artificial intelligence into applied sciences is transforming how we tackle real-world challenges and make sense of complex data. In fields like healthcare, education, and finance, AI technologies, especially machine learning and generative AI, are enabling researchers and leaders to process volumes of information, uncover hidden patterns, and generate answers to questions that once seemed out of reach. These AI tools are not just accelerating research; they’re opening up new possibilities for innovation and discovery.
However, as AI systems become increasingly embedded in applied sciences, the importance of critical thinking skills and ethical considerations grows. It’s not enough for AI to deliver results; those results must be trustworthy, fair, and transparent. That’s where an ethical AI framework comes in. By establishing clear guidelines for fairness, accountability, and responsible AI development, we can ensure that these technologies are used to support sound decisions that drive positive outcomes without sacrificing human values.

How I Use AI Without Giving Up the Wheel
To illustrate what I believe is the proper approach to using AI, I will describe how I build an article for Rhynos Crossing. I start by identifying a topic worth exploring, usually sparked by something I’ve observed, questioned, or wrestled with personally. This early stage involves research, reflection, and framing the idea through my own experience. The intent, the point of view, and the curiosity all originate with me. AI doesn’t suggest a topic or define the angle. That responsibility stays with me. It is human-led thinking.
From there, I move into drafting. I write the first, unpolished, free-form version in a text editor. This stage is thinking in motion: ideas surface naturally, patterns emerge, and my voice takes shape. I don’t try to make it perfect, and I don’t invite AI in yet. I want something honest and real on the page before introducing AI into the process.
Once a draft exists, I bring AI into the mix as a reflective partner, not a creator. These tools analyze the draft, identify patterns, and provide feedback for improvement.
I may ask ChatGPT for suggestions and feedback during this process to examine my work from different perspectives. This step helps me look at the article more critically, but my original thinking remains intact.
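This isn't my exact setup, but here is a simple sketch of what a "feedback only" request can look like in code, using the OpenAI Python SDK. The model name, file name, and instructions are placeholders chosen for illustration, not part of my actual workflow.

```python
# A hypothetical sketch of using a chat model as a reflective partner, not a creator:
# it is asked only for feedback, never for rewritten text.
from openai import OpenAI

client = OpenAI()  # assumes an API key is already configured in the environment

draft = open("draft.txt").read()  # the human-written first draft remains the source of truth

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {
            "role": "system",
            "content": (
                "You are an editor who gives feedback only. Point out unclear passages, "
                "weak arguments, and unstated assumptions. Do not rewrite any sentences."
            ),
        },
        {"role": "user", "content": draft},
    ],
)

# The suggestions are read and weighed by a person; nothing is pasted back automatically.
print(response.choices[0].message.content)
```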
After that, everything returns to human judgment. I review each suggestion deliberately and decide what stays, what changes, and what gets removed. Tone, emphasis, and meaning are refined through choice, not automation. AI can propose options, but it doesn’t decide what the piece means. That responsibility stays with me.
Once the message is solid, I move into optimization. The refined article runs through an SEO service designed to improve discoverability and align with search intent. This step is strictly technical. It focuses on structure and optimization, but it does not alter the voice or the core message.
Before publishing, I do a final pressure-test of the article using Gemini through a custom “GEM” modeled after authors I admire and respect. This step isn’t about rewriting the piece. It’s about asking more challenging questions. Do the ideas hold up? Is the logic sound? Is the narrative clear and honest when viewed through seasoned perspectives?
More often than not, I end this phase by asking a straightforward question: How can this be even better? Below is a simple summary of my article creation process.
A Simple Recap of My Process
- Human-Led Thinking: I choose the topic, define the intent, and frame the questions. Curiosity and perspective always start with me.
- Human-Led Drafting: I write the first draft free-form, allowing ideas to surface naturally and my voice to take shape without interference.
- AI-Assisted Reflection: Once a draft exists, I use AI as a reflective partner to surface clarity gaps, strengthen structure, and challenge assumptions.
- Human Judgment: I make deliberate decisions about what stays, what changes, and what gets removed. Meaning and tone are refined through human discernment.
- Machine Optimization: I use SEO tools strictly for technical alignment and discoverability, without altering the message or voice.
- AI-Assisted Pressure Testing: I stress-test the ideas to ensure the logic holds, the narrative is straightforward, and the argument is honest.
- Human Approval: I give the final read, add visuals, and confirm one last time that the article still sounds like me before publishing.
Finally, I return to human approval. I add images and captions, give the article a final read, and ask myself two last questions: Is this the intent and message I want to deliver to my audience? Is the information helpful and meaningful? Only after answering “yes” do I publish.
That’s the line I won’t cross. AI can assist the process, but it doesn’t replace responsibility, judgment, or voice. I stay in the driver’s seat — every time.

Discipline as a Form of Respect
This process is sacred to me because it requires discipline. It forces me to slow down and respect the reader. AI supports the work, but it never replaces responsibility. My approach is guided by foundational principles such as transparency, fairness, accountability, and non-discrimination, all rooted in a philosophy that values critical thinking to ensure responsible use of AI throughout the process.
And AI isn’t the only safeguard. I also work with a human editor who reviews articles after publication and provides thoughtful feedback to sharpen the message further.
Machines can assist, but wisdom still comes from people.
Learning From Other Creators
I’m a fan of Ray William Johnson. His videos are wildly entertaining, and I’ve definitely lost an afternoon binge-watching them. In one video, he breaks down his creative process, and what stood out to me was the level of discipline behind it. While he’s been open about using AI to help generate visuals and supporting elements, the thinking, timing, and voice remain unmistakably his. AI helps him maintain consistency and scale, but it doesn’t replace creative judgment or intent.
That distinction matters. Using AI to amplify creativity is very different from using it to bypass understanding.

Promoting AI Literacy in Everyday Life
From the apps we use to the decisions organizations make that affect our lives, AI technologies are everywhere, shaping the way we access information, communicate, and even think. But with this growing influence comes a responsibility: to ensure that everyone, not just experts, can navigate, question, and use AI tools wisely.
AI literacy is more than just knowing how to use the latest AI-powered app or asking ChatGPT for quick answers. It’s about understanding how AI systems work, what they can and can’t do, and how the data and algorithms behind them shape their outputs.
AI systems rely on diverse data sources to generate accurate and relevant outputs. At its core, AI literacy is about developing critical thinking skills, evaluating the information AI generates, identifying potential biases, and making informed decisions rather than simply accepting machine-generated answers at face value.
Ethics, Education, and the Role of Human Judgment
A crucial part of AI literacy is AI ethics. As we rely more on machine learning and generative AI to process data and generate insights, we must also consider the ethical implications of these technologies. An ethical AI framework isn’t just a set of rules. It’s a commitment to fairness and human oversight. Responsible AI means holding systems and their creators accountable, ensuring that AI tools support human values and do not perpetuate harm.
In education, the writing process is a prime example of where AI literacy matters. AI tools can help students and writers organize ideas, conduct research, and refine their work. But these tools should never replace the human ability to think critically, question sources, and develop original ideas. Instead, they should act as support, helping users sharpen their arguments rather than simply copy-pasting information without understanding.
Teaching students to use AI responsibly means emphasizing information literacy, ethical standards, and the ability to make sense of complex data. Applied sciences play a key role in helping students and educators understand AI, human cognition, and technological innovation, bridging the gap between theoretical knowledge and practical application.
Shared Responsibility and Real-World Impact
Promoting AI literacy is a shared responsibility. Educational institutions, research scientists, and civil society organizations all play a role in developing effective AI education programs. Research scientists, with their expertise in cognitive research, contribute by deepening our understanding of human cognition and evaluating AI’s impact on learning and critical thinking.
Initiatives like those from the Stanford Institute and other research organizations are leading the way, focusing on building the knowledge and skills needed to thrive in a world shaped by AI. These programs prioritize not just technical know-how, but also the development of critical thinking, ethical reasoning, and responsible AI practices.
For future leaders, this literacy is a competitive advantage. AI excels at processing vast amounts of data and surfacing relevant information, but it cannot replace human judgment, creativity, or the ability to weigh context and values.
Of course, the rise of AI also brings new risks, such as the spread of misinformation, algorithmic bias, and the potential for job displacement. Addressing these challenges requires more than just technical solutions; it demands a culture of questioning, fact-checking, and even reflection. By strengthening information literacy and developing a healthy skepticism toward AI-generated content, we can help ensure that technology serves society, not the other way around.
In the end, promoting AI literacy is about giving everyone the tools to ask better questions, make informed decisions, and hold AI systems accountable. As artificial intelligence continues to evolve, our ability to think critically and stay curious will be the foundation for a future where technology truly benefits all.
By prioritizing AI literacy and responsible AI practices, we can ensure that these technologies augment and support human capabilities — never replace them. This is not just a challenge for educators or researchers, but for all of us. Together, by focusing on fairness, transparency, and the development of critical thinking skills, we can build a society where AI is a force for good and where every individual has the knowledge and confidence to thrive in an increasingly complex digital world.

Implementing AI
Successfully implementing AI in any industry is about more than just deploying the latest technology. It’s about building systems that reflect our values and priorities as a society. Responsible AI starts with designing AI algorithms and models that are fair and accountable. This means actively working to identify and eliminate biases, ensuring that AI systems are subject to human oversight, and holding both the technology and its creators accountable for impactful decisions.
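To give one small, concrete example of what “identifying biases” can mean in practice, here is a hedged sketch of a basic demographic-parity check. The numbers are invented, and a real fairness review involves far more than this, but it shows the kind of signal a human reviewer would then investigate.

```python
# An illustrative bias check: compare a model's approval rates across two groups.
# The predictions and group labels below are made up for demonstration only.
predictions = [1, 1, 1, 0, 1, 0, 1, 0, 0, 0]   # 1 = approved by the model, 0 = denied
groups      = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

def approval_rate(preds, grps, group):
    """Share of positive outcomes the model gave to one group."""
    outcomes = [p for p, g in zip(preds, grps) if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = approval_rate(predictions, groups, "A")
rate_b = approval_rate(predictions, groups, "B")

# A large gap is a prompt for human oversight and investigation, not an automatic verdict.
print(f"Group A approval rate: {rate_a:.0%}")
print(f"Group B approval rate: {rate_b:.0%}")
print(f"Gap: {abs(rate_a - rate_b):.0%}")
```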
Education and ongoing training are essential parts of this process. As AI tools become more prevalent, everyone needs to develop the skills to use AI critically. This includes understanding how AI systems work, where their data comes from, and how to interpret their outputs. Information literacy and the ability to think critically about AI-generated answers are crucial for making informed decisions and avoiding the pitfalls of “copy-paste” thinking.
Institutions like the Stanford Institute and other research organizations are leading the way in developing best practices for AI use, emphasizing ethical standards, human oversight, and continuous learning. By encouraging a culture of responsible development, we can ensure that AI tools support human creativity and decision-making rather than replace them. This approach not only builds trust in AI but also empowers individuals to use AI as a force for good, supporting the development of knowledge, ideas, and solutions that benefit society as a whole.
Ultimately, implementing AI is about more than technology — it’s about people, principles, and purpose. By focusing on fairness and the development of critical thinking skills, we can create a future where AI enhances our abilities, supports our goals, and helps us navigate an increasingly complex world with confidence and integrity.

AI as a Partner, Not a Replacement
When you think about AI in your own work and daily life, a better question than “What can AI do for me?” might be, “How can AI help me think more clearly and act more intentionally?”
AI works best when it sharpens who you are, not when it replaces the effort required to become better. By automating routine processes and handling data analysis, AI frees you to concentrate on the higher-value, more strategic aspects of your work. The goal isn’t automation for its own sake. The goal is alignment between thinking, action, and responsibility.
Why This Matters
This matters because the tools we use don’t just make us faster; they shape how we think. When AI is treated as a shortcut instead of a support system, we don’t just save time. We slowly give up the mental effort required to question, interpret, and decide.
AI isn’t the problem. Unexamined reliance is. Over time, convenience can replace curiosity, and speed can crowd out understanding. That shift rarely feels dramatic. It happens gradually, through small, well-intentioned choices that remove friction—but also remove reflection.
A human-first approach to AI keeps judgment where it belongs. It ensures that responsibility, context, and meaning stay in human hands, while AI strengthens clarity, efficiency, and execution. Used this way, AI becomes a strategic partner, and thinking remains the skill that matters most.

Future Research Directions
The future of artificial intelligence is both exciting and, yes, complex, with new research directions opening up possibilities across nearly every field. One of the most dynamic areas is the development of advanced machine learning models that learn more efficiently and solve more complex problems. Researchers are also exploring how AI can be integrated with other emerging technologies to create smarter, more secure systems.
Generative AI is another rapidly evolving frontier. These systems can create new content, pushing the boundaries of what machines can do. While generative AI offers incredible opportunities for creativity and innovation, it also raises significant concerns about bias, misinformation, and the ethical use of AI-generated content. Addressing these challenges requires ongoing research into responsible AI development and the creation of standards that prioritize fairness and human oversight.
Institutions like the Stanford Institute and other leading research organizations are at the forefront of these efforts, working to deepen our understanding of AI’s impact on society and to develop tools and frameworks that support responsible AI use. As AI technologies continue to advance, it’s essential to prioritize critical thinking skills and information literacy — not just for researchers and developers, but for everyone who interacts with AI tools.
In education, for example, AI can be a powerful tool for helping students develop critical thinking skills and engage with complex ideas. But it’s crucial to avoid over-reliance on AI tools and to emphasize the importance of human oversight and the ability to question and interpret information. The future of AI research will depend on our collective ability to balance innovation with responsibility, ensuring that AI is used to augment human skills and support the development of a fair society.
By focusing on ethics, responsible development, and the cultivation of critical thinking, we can guide the future of AI in a direction that benefits everyone, supporting education and strengthening the fabric of society as a whole.
Thanks for reading.

Frequently Asked Questions
Is this article anti-AI?
- No. I use AI as my assistant in my work and writing. This article isn’t about resisting AI — it’s about using it responsibly. AI is most effective when it supports human judgment, not when it replaces the effort required to think, interpret, and decide.
Can AI actually improve critical thinking?
- Yes, when it’s used intentionally. AI can surface blind spots, challenge assumptions, and help clarify ideas. But critical thinking still requires human engagement. AI can assist the process, but it can’t take responsibility for understanding or meaning.
Is it okay to use AI for writing and idea generation?
- For me personally, I say absolutely. The key is staying involved. When humans lead the thinking, and AI helps refine, pressure-test, or optimize the work, the result is stronger and more authentic. Problems arise when AI is used to bypass thinking rather than enhance it.
What’s the risk of relying too heavily on AI?
- Over time, heavy reliance can weaken judgment. When speed replaces understanding, and convenience replaces curiosity, we lose the habit of questioning and interpreting information for ourselves. That erosion happens gradually, which is why it’s easy to miss.
What does “human-first AI” really mean?
- Human-first AI means people remain responsible for intent, context, and decisions, while AI supports efficiency, clarity, and execution. It’s about keeping thinking, accountability, and meaning in human hands while letting AI do what it does best... to assist.