Technology Essentials in Education Episode 12:
AI Ethics in Action
Host: Monica Burns
Mar 13, 2026
About the Episode
Technology Essentials in Education is your go-to podcast for practical insights on using technology to simplify your school week. Hosted by author and educator Monica Burns, Ed.D., in partnership with Jotform, this series is designed for K-12 educators, administrators, and leaders looking to make a meaningful impact. In this episode, Monica chats with Nasser Jones, founder of the nonprofit Bending the AI Curve, who shares his mission to ensure the future of technology is equitable for everyone. Together, they explore how to move beyond a reactive "policing" mindset to find proactive, strategic solutions that bridge the AI gap and prepare all students for the digital age. They discuss why minimizing "tool fragmentation" is essential for reducing teacher burnout, how to use agentic AI to identify and surface bias in school data, and how the AI Academy is empowering communities of color through culturally relevant tech resources.
Hello there, my name is Monica Burns, and welcome to Technology Essentials in Education.
Today's episode is titled AI Ethics in Action: Bending the AI Curve, with Nasser Jones.
In today's episode, I'm joined by Nasser Jones, who is the founder of the nonprofit Bending the AI Curve.
Nasser is on a mission to ensure that the future of technology is equitable for everyone.
In today's episode, we dive into this critical world of AI ethics and talk about what it means to truly prepare all students for this age of AI.
Lots of great tips, strategies, and gems in this conversation. Let's get into it.
This episode is brought to you by Jotform. Jotform provides an all-in-one solution to streamline administrative tasks, enhance community engagement, and foster innovation.
Using their no-code drag-and-drop forms and workflows, your teams can securely collect and store data, automate tasks, and collaborate on team resources.
Educational institutions are also eligible for a 30% discount on Jotform Enterprise. Head to their website to learn more at jotform.com/enterprise/education.
Welcome to the podcast. I am so excited to chat with you today about a really big and very important topic of AI ethics and what this means for folks inside and outside of education.
Before we get into all of that together, can you share a bit about your role? What does your day-to-day look like?
Yeah, my name is Nasser Jones, and thank you for having me on this podcast, Doc. I really appreciate it. I'm excited about being here, of course, being with a fellow educator.
My day-to-day job is I build custom apps for corporate, nonprofit, and educational institutions, and I also run a national nonprofit organization called Bend the AI Curve.
Amazing. Those are all things I want to get into in our conversation today.
You mentioned bending the AI curve and that's kind of where I want to start with our conversation.
Today we're talking about AI ethics and that term you use, bending the AI curve. What problem or issue are you referring to there?
Literally nine months ago, I was out of business. I spent the last two and a half years telling people AI is coming for some jobs and maybe replacing some businesses.
I had a full service marketing firm serving educational organizations, college access professionals, and such.
My clients would come to me and say, I just built a new website on Wix or a great brochure on Canva, great for them but not great for my bank account.
I've been in the AI space since day one, since the launch of OpenAI's ChatGPT, and knew what we had on our hands.
Nine months ago, I started pivoting into the AI space seriously and started looking at what I could get ahead of.
Every time I would get ahead, OpenAI or Claude would come up with something new.
About six months ago, I started reading eight newsletters every single day, mostly around nonprofits, education, and small business.
I started stumbling on data coming out of MIT and Stanford that showed me a disturbing trend.
I got ahead of something called vibe coding, which is using AI to help you build apps, and I built a whole app store.
I was going to go down the traditional route of building software, getting it funded, and running ads, but the data showed a gap happening in the AI space.
For some demographics in the United States, spending has reached $253 billion this year, and the growth is exponential.
But for groups that have always had gaps, like rural communities, Indigenous communities, and veterans, their curve is flatlining or going down.
We have to address this gap or we won't build the people power to take on the jobs that private investment is creating.
That phrase bending the AI curve is such a great visual for people trying to understand this concept.
When you hear AI ethics, what are the first two or three ethical questions you want decision makers to ask before adopting AI tools?
What I'm seeing in education is a reactionary response to AI ethics and policies, often handing the issue to IT departments as if it were just a technical problem.
The first question is, am I just being reactionary to what I hear in the marketplace?
Second, we need to be thoughtful about AI ethics and make sure it applies across the board, because you can't say students shouldn't use AI but teachers and administrators can.
That creates an adversarial relationship between teachers and students, and I don't want teachers being AI police in the classroom.
Third, AI as a tool carries real ethical concerns and can cause damage, but it can also bring far more good.
We need to be thoughtful, strategic, proactive, and invite more constituents into the conversation, including board members, administrators, teachers, parents, and the community.
You need skeptics and enthusiasts at the table; everyone along that continuum belongs in the conversation.
I encourage people to hold a healthy hesitation, which is valuable when interacting with something new.
The ability to be proactive, transparent, and come out in front of something is important, like in the teacher-student example you shared.
These are layers that school leaders, educators, and community members can keep in mind when deciding what's right for them.
Where do you see bias show up most often in AI products? Is it data, model, user experience, or how results get used?
Some of it is really the data we start with. In education, we've seen bias in student discipline, course recommendations, and placements.
If your data is biased from the beginning, it will be flawed in how you use it.
The beautiful part about AI today is you can use AI to analyze if your data is biased and how that bias might show up in curriculum, discipline, or hiring.
You can use AI to develop a plan and execute it, and even if you fail forward, AI can adjust.
We're moving past call and response, where you put out a call and get a response; you need to approach AI agentically, solving problems in multiple ways.
Today, you can have your AI flag things in real time, like discipline problems early in the semester, to provide interventions for students and coaching for teachers.
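The real-time flagging Nasser describes would sit on top of a district's data pipeline, but the core disparity check underneath is simple to illustrate. Here is a minimal sketch, assuming early-semester referral counts and enrollment figures are already available; the function name, threshold, and sample data are hypothetical, not part of any specific product discussed in the episode.

```python
from collections import defaultdict

def flag_discipline_disparities(referrals, enrollments, threshold=1.5):
    """Flag student groups whose referral rate is at least `threshold`
    times the overall rate.

    referrals:   list of (group, count) records from early-semester data
    enrollments: dict mapping group -> number of enrolled students
    Returns (group, rate_ratio) pairs, most severe first.
    """
    totals = defaultdict(int)
    for group, count in referrals:
        totals[group] += count

    overall_rate = sum(totals.values()) / sum(enrollments.values())

    flags = []
    for group, enrolled in enrollments.items():
        rate = totals.get(group, 0) / enrolled
        if overall_rate > 0 and rate / overall_rate >= threshold:
            flags.append((group, round(rate / overall_rate, 2)))

    # Sort so the largest disparity surfaces first for intervention.
    return sorted(flags, key=lambda f: -f[1])
```

A real agentic system would run a check like this continuously and pair each flag with a recommended intervention, but even this rule-based version shows how surfacing a disparity early in the semester becomes a trigger for student support and teacher coaching rather than an end-of-year autopsy.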
If you start with flawed data or a flawed approach, it's not because you're doing anything wrong; the technology is new.
We're creating a reverse problem where AI should help us work efficiently, but technical teams scramble to find solutions, leading to fragmentation.
Now, a school district might have five different AI solutions, which means teachers have to learn multiple tools, creating complexity.
You should only be using two tools: a primary large language model like Gemini, ChatGPT, or Claude, and maybe one additional helpful tool.
Using more than three tools adds unnecessary complexity; you need someone to help compress the learning curve for your constituents.
It's important to factor in time and energy to learn tools, not just dollars, because switching between many tools can be overwhelming.
Teachers have been traumatized over the last 25 years with constant changes and added layers to their work, making it less fun.
Teachers are overwhelmed, and now AI is another new thing being added, which can feel like too much.
We've traumatized a generation of teachers, and I want to help solve that.
When talking to educators, I emphasize that AI tools might eventually help take things off their plate rather than add more.
Jotform lets you build forms quickly for surveys, homework submissions, quizzes, and more, with templates designed for educators.
Educational institutions can get a 30% discount on Jotform Enterprise at jotform.com/enterprise/education.
You alluded to the number of tools someone might have access to, but what are the biggest barriers to meaningful AI participation?
Is it device access, connectivity, language barriers, trust, time, or something else?
One thing I talk about is the slack in your organization that prevents you from moving forward.
Organizations drift toward complexity, not simplicity.
Often, policies prevent you from taking on tools that could be helpful.
Six months ago, conversations about privacy and giving information to AI systems were a big hesitation.
Wading through policies and figuring out how to approach AI is a serious challenge that requires board conversations.
Other impediments include practical things like curriculum design and discipline policy, and how AI can help compress roadblocks to optimization.
Working with schools, I see some people pushing back or making decisions about what they're comfortable bringing into their communities.
Shifting to student interactions, what does a developmentally appropriate conversation about AI ethics, bias, or automation look like in classrooms?
With vibe coding, teachers should think about creating apps that help students work more efficiently.
We know students are using AI because teachers and others use it, so the question is what's appropriate and ethical use.
Are students just using AI to get the quickest result, or is there more to it?
In classrooms, I would help students understand the process and add layers to the conversation about AI use.
For example, if a student generates work with AI, they could create a video explaining their prompt and response.
I'm working on AI technology to help students better maintain their voice in writing.
We walk students through waypoints, like the old Oregon Trail, asking questions about their prompt and research sources.
I also use social-emotional learning to understand where students are coming from and how that affects their writing.
I want students to go through the whole process, including punctuation and writing quality.
Using agentic AI, the system constantly makes recommendations tailored to the student's needs.
Even skeptics can see how AI can precisely target students based on performance, ESL status, and academic goals.
I can fine-tune AI to work on specific district goals and state standards to help students progress more quickly.
These are basic things teachers could use today, and we could drill down much more if time allowed.
This is perfect for listeners wrapping their heads around what's possible and thinking about the rest of the school year.
Before we finish, tell us about the AI Academy, your mission, and where people can connect with you.
Bend the AI Curve is my ethos that AI should be for all. I've created an app store with 70 free apps to remove roadblocks.
You can bend the AI curve where you are by using the app store or setting up community computer labs for access.
Many Latino families only have one smart device, and districts often don't send smart devices home, creating access challenges.
You can bend the AI curve by downloading the playbook for your demographic.
I'm launching a 10-city tour starting in Philly, going to cities with the most need, including Chester, Wilmington, Baltimore, D.C., Northern Virginia, Atlanta, Chicago, Detroit, Vegas, Phoenix, and Southern California.
We're launching the AI Academy for students of color because these students face particular challenges in these cities.
We'll do a week of AI conversations in each city to begin necessary conversations that are not just for nerds.
I also have my university, where veterans teach veterans how to prompt and get comfortable with technology, with culturally relevant content.
Content is tailored for reservations, indigenous people, and others based on their place in the world and how technology can help them.
You can bend the AI curve in your district, classroom, or community by implementing policies and training.
If you need help, when I come to do training, we focus on your biggest district problems and walk through solutions together.
Before leaving, you'll have a roadmap to solve problems and a framework for agentic AI use.
My goal is to help you get from 10% to 30, 40, or 50% utility from your AI tools rather than adding complexity.
You can find me online, connect with my team, and I'm happy to help your organization regardless of financial situation because AI should be for all.
Thank you for your time, energy, fantastic strategies, and resources for educators. We'll link to all of that so people can find and connect with you.
It was lots of fun chatting with Nasser Jones today.
Let's make this EdTech easy with a few key points from the episode.
The AI gap is widening, with some communities accelerating while others risk falling further behind.
Schools often respond reactively to AI ethics instead of taking a strategic, inclusive, proactive approach.
Biased data leads to biased outcomes, and AI can help analyze and surface inequities in curriculum, discipline, or hiring.
Too many AI tools create fragmentation, and most districts might need only a primary language model and possibly one additional tool.
Thanks so much for joining today's episode. Make sure to check out the resources to learn more about Nasser Jones' work.
A big thank you to Jotform, the presenter of today's episode. To learn more and get a 30% discount on Jotform Enterprise, head to jotform.com/enterprise/education.
