“AI and Business”


Better Prompts, Better Results: Why the Prompt Is the SOP for Small Business AI

A business owner told me recently, “Mike, I tried AI—garbage. It doesn’t work.”

I asked what they typed.

“Write me a marketing email.”

That right there is the problem.

Most people don’t get bad results from AI because the technology is broken. They get bad results because they gave vague instructions and expected a specific outcome. In operations, we already know how this movie ends: unclear inputs create inconsistent outputs, rework, and frustration.

That’s why I teach this simple idea:

The Prompt Is the SOP

If you treat your prompt like a real work instruction—clear, specific, structured—AI becomes useful fast. If you treat it like a wish, you’ll get wishful output.

This post breaks down a beginner-friendly prompt framework. It gives you copy/paste examples for marketing, customer support, and process documentation. This way, you can get real value from AI without needing to be “technical.”

Why prompts matter (more than the model)

AI is closer to a fast intern than a magic employee.

A fast intern can be incredible. But only when you give them a strong brief:

  • What are we trying to accomplish?
  • Who is it for?
  • What does “good” look like?
  • What are the rules?

When you skip those pieces, you get generic work. And generic work doesn’t move the business forward—it just creates more editing.

Small business owners don’t have time for that.

The new episode of “Mike Schiano In the Queue” covers this topic, available wherever you get your podcasts.

Instead of asking AI to “help,” you give it a task. You assign it just like you would assign work to a team member.

The simple prompt framework: Task, Context, Format (plus two upgrades)

If you want one framework you can remember and teach your team, use this:

  1. Task – What you want done (the outcome, not the topic)
  2. Context – The details that matter (business, customer, offer, constraints)
  3. Format – What “done” looks like (email, checklist, table, script, etc.)

Then add two upgrades that dramatically improve consistency:

  1. Role – Who the AI should act as (support manager, ops leader, marketing copywriter)
  2. Constraints – The rules (tone, length, what to avoid, must-include items)

That’s prompt engineering in plain English. No jargon required.
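If it helps to see the framework as a reusable template rather than freehand typing, here is a minimal sketch in Python. The function name and fields are my own illustration, not a standard API: it simply assembles the five parts into one prompt string you can save in a shared doc and paste into any AI tool.

```python
def build_prompt(role, task, context, fmt, constraints):
    """Assemble a reusable prompt from the five framework parts."""
    return "\n".join([
        f"You are {role}.",             # Role: who the AI should act as
        f"Task: {task}",                # Task: the outcome, not the topic
        f"Context: {context}",          # Context: the details that matter
        f"Format: {fmt}",               # Format: what "done" looks like
        f"Constraints: {constraints}",  # Constraints: the rules
    ])

# Example: the marketing-email brief from this post, filled in
prompt = build_prompt(
    role="a small business copywriter",
    task="Write a marketing email that drives bookings.",
    context="Local HVAC company serving homeowners in one metro area.",
    fmt="Subject line + email body + P.S.",
    constraints="Under 180 words, warm tone, one clear call-to-action.",
)
print(prompt)
```

The point isn’t the code; it’s that a prompt with named slots is a template your whole team can fill in the same way every time.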

Example 1: “Write a marketing email” (and why it fails)

Let’s start with the most common prompt I see:

“Write a marketing email for my business.”

The issue isn’t that the AI can’t write. The issue is that it has nothing to anchor to. There is no goal, no audience, no differentiators, no offer details, and no tone boundaries.

Here’s the upgraded version that actually performs:

Copy/paste prompt (Marketing Email):
You are a small business copywriter.
Task: Write a marketing email that drives phone calls (or bookings).
Context: My business is [type], we serve [city/area]. Best customers are [persona]. Our differentiators: [3 bullets]. Offer: [what you’re promoting]. Common objections: [price/timing/trust/etc.].
Format: Subject line + preview text + email body + P.S.
Constraints: Under 180 words, warm and confident, no hype, one clear call-to-action to call or book.

Notice what changed: we stopped asking for “content” and started giving a brief.

That’s the difference between AI that “sort of helps” and AI that produces something you can actually send.

Example 2: Customer support that de-escalates (without creating risk)

Another place small businesses get burned is customer support.

Someone sends a heated message:
“This is defective. I’m telling everyone. This is unacceptable.”

If you prompt casually, AI might accidentally:

  • admit liability,
  • make promises you can’t keep,
  • or escalate the tone.

Instead, you want a response that’s calm, clear, and resolution-focused—while protecting the business.

Copy/paste prompt (Support Reply):
You are a customer experience manager. Draft a response to this customer message: [paste message].
Goals: De-escalate, protect the brand, move to resolution.
Format: Email reply only.
Constraints: Be empathetic. Do not admit legal liability. Offer two options (refund/replace OR troubleshooting + call). Keep the response under 120 words. End with one question to move forward.

That prompt is basically a support policy encoded in writing. Again: prompt = SOP.

Example 3: Build SOPs faster (where operators win)

This is the part that excites me most as an operations person.

Small businesses can use AI to create the first draft of SOPs, checklists, QA rubrics, and training guides—fast. Not as a replacement for leadership, but as a speed multiplier.

Let’s say you need a simple procedure for handling customer no-shows.

Copy/paste prompt (SOP Builder):
You are an operations leader. Create an SOP for: “Handling customer no-shows.”
Audience: New hire on day 3.
Context: We schedule 30-minute appointments. We allow a 10-minute grace period. We confirm via text. We reschedule once without fee. The tone should be polite but firm.
Format: Purpose, trigger, numbered steps, exception handling, and a QA checklist.
Constraints: 8th-grade reading level; keep it practical and brief.

This is how AI stops being a novelty and becomes part of your operating system.


Three ways to look like the adult in the room with AI

If you want AI to be useful across a team—not just in your own browser—do these three things.

1) Create a prompt library

A “prompt library” is just a shared doc with your best prompts for repeatable work:

  • sales follow-ups,
  • customer responses,
  • SOP templates,
  • hiring scorecards,
  • meeting notes → action plans.

This turns one person’s experimentation into a company asset.

2) Use simple structure so nothing gets missed

Headings like:

  • TASK:
  • CONTEXT:
  • FORMAT:
  • CONSTRAINTS:

…make prompts easier to reuse and easier for AI to follow. It also makes it easier for a team member to review and improve.

3) Add a QA-style self-check

This is a pro move that’s still simple:

“Before finalizing, verify you included [X], avoided [Y], and matched tone [Z]. Then rewrite the final.”

Operators understand checklists. AI responds well to them. It’s a natural fit.
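To make the self-check concrete, here is a small, self-contained sketch (the helper name and wording are my own invention): it appends a QA-style verification line to any saved prompt, so the checklist travels with the template instead of living in someone’s head.

```python
def add_self_check(prompt, must_include, must_avoid, tone):
    """Append a QA-style self-check so the AI verifies its own draft."""
    check = (
        f"Before finalizing, verify you included {must_include}, "
        f"avoided {must_avoid}, and matched a {tone} tone. "
        "Then rewrite the final."
    )
    return prompt + "\n" + check

# Example: bolting the self-check onto a support-reply prompt
checked = add_self_check(
    "Task: Draft a reply to an upset customer.",
    must_include="two resolution options",
    must_avoid="admitting legal liability",
    tone="calm, empathetic",
)
print(checked)
```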

The real goal: predictable output, not “cool AI”

Most businesses don’t need AI that’s flashy.

They need AI that’s consistent.

When you standardize prompts the way you standardize processes, you reduce rework and get repeatable quality. That’s what creates ROI. And that’s why prompting isn’t a “tech skill”—it’s a management skill.

Try this today (10 minutes)

Pick one recurring task you do every week—just one:

  • writing a customer reply,
  • drafting a marketing email,
  • turning notes into actions,
  • creating a checklist,
  • rewriting a web page section.

Rewrite your prompt using:
Task + Context + Format
Then add:
Role + Constraints

Run it three times, tighten it, and save it to your prompt library.

That’s how you start building an AI-ready business without hiring a giant team or buying a complicated platform.

Want a prompt upgrade?

If you tell me the exact task you want AI to help with, I’ll suggest the single best “missing piece” to add to your prompt. Specify whether it’s marketing, customer support, hiring, SOPs, or meeting follow-ups. This will improve the output.

Trusting AI In the Queue

Why We Trust Flawed AI—and Why That’s the Real Danger

By Mike Schiano, AI Strategist, Author, Podcast Host

That’s the question explored in Rachel R. Rosner’s provocative article, The Allure of Flawed AI: Trusting the Machine, written for The Times of Israel Blog. Her insight? Our trust in AI isn’t just about convenience—it’s a psychological and cultural habit decades in the making.

A New Tech, an Old Pattern

Rosner connects today’s uncritical trust in AI to the theories of the Frankfurt School. This group consisted of mid-20th-century philosophers like Theodor Adorno and Max Horkheimer. They warned that mass media (think radio, TV, and advertising) didn’t just entertain—it trained us to accept appearances as truth. When something “looked and sounded” authoritative, we stopped asking if it was. I wrote about this phenomenon in 2005 in a paper detailing how advertising is a key driver of consumer debt.

Fast-forward to 2025: AI tools like ChatGPT speak fluently, remember your tone, and respond instantly. They sound like they know what they’re talking about. And for many users, that’s enough. Fluency now mimics trust. Confidence gets mistaken for credibility.

From Experience to Authority

Rosner points out a subtle danger: “The accuracy of the content becomes secondary to the experience of being guided.” That’s a massive shift. We’ve moved from evaluating what is being said to valuing how it’s being said.

Even when AI gets it wrong (and sometimes dangerously wrong, as in the case of xAI’s Grok making antisemitic statements), we continue to rely on it—especially in times of uncertainty. Why? Because the machine feels stable, consistent, and reliable—even when it’s objectively not.

The Real Threat Isn’t AI. It’s Us.

Rosner’s argument is chilling in its clarity: “The real concern is not whether AI will replace human reason. The real danger is that AI will train us to stop asking whether it should.”

In other words, the more we let AI think for us, the less we think about it.

This isn’t a call to panic—it’s a call to awareness. AI is here to stay. But we can’t afford to surrender our critical thinking to the fluency of machines. We need to question, verify, and stay curious—especially when the answers come in a confident tone.

Trusting AI blindly is easy. Questioning it is harder—but far more important. The future doesn’t belong to the most advanced algorithms. It belongs to the humans who know when to doubt them.

Rachel R. Rosner is an American, Israel-based philosopher, writer, and junior fellow at The Van Leer Jerusalem Institute. She recently completed her PhD in philosophy and writes on antisemitism, memory, identity, and critical theory. Her forthcoming book, Adorno and the Question of Theology: Religion and Reason Beyond Foundations, will be published by Bloomsbury. Read more of her work.

Tune in to In the Queue for more on this topic.

Texas Takes the Lead: New AI Consumer Protections Enacted


In this week’s episode of the In the Queue Podcast, where we delve into the intersections of technology, finance, and the evolving job market, we spotlight a significant development out of Texas that’s making waves in the realm of artificial intelligence and consumer rights.

On June 22, 2025, Texas Governor Greg Abbott signed House Bill 149 into law. It is officially known as the Texas Responsible Artificial Intelligence Governance Act, or TRAIGA. This landmark legislation positions Texas at the forefront of AI regulation in the United States.

TRAIGA sets forth comprehensive guidelines to ensure that AI technologies are developed and deployed responsibly. Key provisions include:

  • Prohibition of Harmful AI Practices: The law bans AI systems that intentionally discriminate, promote self-harm, or encourage criminal behavior.
  • Restrictions on Government Use: Government entities are barred from using AI to assign social scores based on personal characteristics or behaviors. Additionally, deploying AI for biometric identification without individual consent is prohibited.
  • Protection of Constitutional Rights: AI systems designed solely to infringe upon constitutional rights or unlawfully discriminate against protected classes are expressly forbidden.

Encouraging Innovation with Oversight

Understanding the importance of fostering innovation, TRAIGA introduces a regulatory sandbox program. This initiative allows companies to test new AI systems without immediate regulatory repercussions, provided they obtain approval from the Texas Department of Information Resources and relevant agencies.

To oversee these efforts, the law establishes the Texas Artificial Intelligence Council. This body will monitor compliance and support the responsible advancement of AI technologies within the state.

Enforcement and Implications

Enforcement of TRAIGA falls under the exclusive authority of the Texas Attorney General. Violations can result in civil penalties of up to $100,000 per incident. Notably, the law specifies that enforcement actions cannot be taken against AI systems that have not been deployed.

For federally insured financial institutions, compliance with existing federal and state banking laws is deemed sufficient under TRAIGA, providing clarity and continuity for these entities.

Broader Impact and Future Outlook

TRAIGA’s enactment marks a significant step in balancing the rapid advancement of AI technologies with the imperative to protect individual rights and societal values. As AI continues to permeate various sectors, from finance to healthcare, such legislation serves as a blueprint for responsible innovation.

Other states and federal entities may look to Texas’s approach as a model for crafting their own AI governance frameworks.

Other States Protecting Consumers from AI

Several states have taken meaningful steps similar to Texas to protect consumers from AI-related risks. They focus on algorithmic discrimination, transparency, risk assessments, and enforcement. Here is an overview of leading state actions:

Colorado

  • Colorado AI Act (SB 24-205): Enacted in May 2024, this is viewed as the most comprehensive state law to date. It regulates “high-risk” AI systems, requiring developers and deployers to use reasonable care to avoid algorithmic discrimination, particularly in consequential decisions related to education, employment, finance, healthcare, housing, insurance, and legal services.
  • The law empowers the attorney general to enforce penalties for violations.

California

  • Consumer Privacy Laws: California has two major privacy laws with AI provisions. The state’s consumer privacy law grants residents the right to opt out of AI-driven profiling that impacts employment, insurance, health, or other outcomes.
  • California AI Transparency Act (SB 942): Effective January 2026, this law requires providers of widely used AI systems to disclose automatically generated content, with significant penalties for noncompliance. The state also prohibits using bots to incentivize sales without disclosure.

Utah

  • The Utah Artificial Intelligence Policy Act mandates consumer disclosure for AI use cases, such as chatbots, impacting consumers’ awareness and protection when interacting with AI systems. Recent amendments extended the law’s effect through 2027 and narrowed its requirements.

New Jersey

  • SB 332: Enacted in January 2024, this law requires companies to notify consumers and allow them to opt out when personal data is collected and used for automated decisions. It also prohibits using or processing personal data in a discriminatory manner.

Illinois

  • Workplace Legislation: In August 2024, a law was enacted barring employers from using AI that considers an applicant’s race or zip code in hiring decisions. Additional proposed bills would require impact assessments on automated decision-making affecting employment, education, and housing, and reporting those assessments to state authorities.

Connecticut

  • Connecticut has regulated government AI procurement and use since 2023 and is expected to soon expand protections into the private sector.

Massachusetts, New Mexico, Vermont, Virginia, Georgia, Hawaii

  • These states have introduced—and in some cases advanced—legislation requiring risk management, impact assessments, and prohibiting certain forms of algorithmic discrimination. Massachusetts’ and New Mexico’s pending acts closely mirror Colorado’s risk-based approach; Vermont’s proposal focuses on high-risk systems and transparency; Virginia passed a comprehensive bill through its legislature (though it was vetoed in 2024), and Georgia, Hawaii, and others are considering similar proposals.

Other Leading States with AI Consumer Protections

State | Key Provisions | Status
Colorado | Broad ban on AI discrimination in critical sectors; penalties for violations | Enacted
California | Consumer opt-out for profiling; transparency for AI-generated content | Enacted
Utah | Disclosure mandates; consumer notifications for AI use | Enacted
New Jersey | Opt-out for automated data use; anti-discrimination in data processing | Enacted
Illinois | Ban on race/zip code in AI hiring; proposed risk and impact assessments | Enacted/Proposed
Connecticut | AI safeguards in government procurement; broader protections pending | Enacted
Massachusetts | Risk management/disclosure for high-risk AI | Proposed
New Mexico | Risk-based regulation for AI, similar to Colorado | Proposed
Vermont | Transparency; anti-discrimination for high-risk AI | Proposed
Virginia | Comprehensive protections (vetoed 2024, may return in future) | Proposed

Key Trends

  • Comprehensive Laws: Colorado and California set the national standard for broad, cross-sectoral protections.
  • Sector Focus: Employment, insurance, lending, and healthcare are commonly prioritized.
  • Opt-Out and Transparency: Many states require consumer notification, opt-out mechanisms, and clear disclosures.
  • Enforcement: Most laws or proposals grant enforcement power to state attorneys general, often with significant penalties for violations.

Texas is part of a rapidly expanding movement among states to regulate AI for consumer protection, with many adopting similar frameworks that guard against discrimination, mandate risk assessments, ensure transparency, and empower consumers with actionable rights.

Pope Leo speaks on AI

Pope Leo is concerned with AI

by Michael Schiano – Operations Executive | AI & Workforce Strategist | Host of the “Mike About Money” and “In the Queue” podcasts.

Congratulations to Pope Leo XIV. We are excited for his leadership and guidance.

In his first official address, the new Pope included his thoughts on what he called another Industrial Revolution taking place in the field of Artificial Intelligence. He said AI poses “new challenges for the defense of human dignity, justice, and labor.” Read Emma Bubola’s story about Pope Leo’s speech in the NY Times.

Though the Pope may not have read my new book yet, AI is Coming for Your Job, I am glad to be on the leading edge of what the Pope sees as a social question that the Catholic Church will weigh in on.

It may surprise some that the Pope is thinking about AI and its impact on workers. But it shows how serious a threat this new technology poses that the leader of the Catholic Church felt compelled to mention it in his initial message.

Some describe AI as the greatest threat to mankind while others describe it as the greatest breakthrough. As with any power, how it is deployed is key. Will it be used for good? Undoubtedly, and it is already being used to make businesses more efficient and save lives. Will it be used for evil purposes? Absolutely, and it is already being used by criminals.

The gold rush to leverage AI in every way possible continues to grow each day. In this hurry, workers are already feeling the pressure on their jobs. As reported by Bloomberg and other media, AI is in the process of replacing more than 50% of the tasks performed by market research analysts and 67% of tasks performed by sales representatives.

In AI is Coming for Your Job, What you can do to Survive and Thrive, workers in all industries will find action steps and resources they can take immediately to protect their jobs, careers and income from the inevitable impact of Artificial Intelligence.

What do you think?

Roger Hooks

Future of Creative Jobs: Roger Hooks on Why AI Won’t Replace You — Unless You Let It

By Michael Schiano | Featuring insights from the In the Queue Podcast with Mike Schiano

What happens to creative jobs in a world run by Artificial Intelligence?
That’s the question on everyone’s mind — and Roger Hooks, Creative Director at Super Micro, has a bold answer:

“You won’t lose your job to AI. You’ll lose your job to someone who knows how to use it.”

In the latest episode of In the Queue with Mike Schiano, Hooks shares powerful, real-world insights from over 36 years in the creative industry. He’s worked with giants like Apple, Intel, Nvidia, and Universal Music. Whether you’re a business owner or a creative professional, his message is clear: AI is a tool, not a threat. But only if you choose to learn it.

AI Doesn’t Replace Creativity — It Speeds It Up

According to Hooks, AI is like a digital assistant that helps you brainstorm faster, sketch quicker, and develop ideas for review much more efficiently.

“AI is more of a development tool — not the end result.”

For example, his team uses AI to create “rough comps” in video production and still images. This allows management teams to visualize concepts earlier in the creative process, making it easier to secure approvals and move projects forward.

Tip for Small Businesses:
Hooks recommends starting small with AI. Use it for idea generation and concept development before investing in more advanced tools.

The Barrier to Entry is Lower — But So Is the Filter

AI is lowering the barriers for entering creative fields. While that means more opportunities, it also means more competition.

“The barrier of entry is lower. So, there will be more people there.”

When everyone has access to powerful tools, your portfolio becomes your strongest weapon.
Hooks says that where you went to school or where you are from doesn’t matter. What matters is how well you can demonstrate your skills to a potential employer or client.

Can You Articulate Your Ideas?

One of the most important skills for modern creatives, according to Hooks, is articulation — the ability to clearly explain your ideas to clients, managers, and even AI tools.

“The one that can articulate their idea the best is the one who’s going to get more ideas across.”

Whether you’re pitching a project or designing a marketing campaign, strong communication is key to success.

Creativity Isn’t Dying — It’s Evolving

According to Roger, artificial Intelligence isn’t here to replace creative professionals. It’s here to reshape the way they work. The future belongs to those who learn the tools, keep the human touch, and continue to innovate.

Want more insights like this?
👉 Follow Mike Schiano and subscribe to In the Queue Podcast for expert advice on navigating the new world of work.

Mike’s new book, AI is Coming for Your Job, What to do to Survive and Thrive, is available as an eBook or paperback wherever you buy books.

Artificial Intelligence is no longer the Future for Contact Centers

AI is already here and is being implemented by companies across the world.

Call AT&T about your cell service and you will be talking to an artificially intelligent agent. This is a computer-generated voice talking to you, not a real agent. And that robot is verifying your identity and answering questions in a conversational manner. The quality is stunning. No button pushing required. You can answer questions and the system understands you, then either answers or directs you to where you need to go for help. This is a big advance from the days of a robotic voice chopping its way through some basic commands and asking you to press 1 for yes and 2 for no.

Surveys show consumers like getting answers, information and service without having to deal with other humans. Great news for company bottom lines…not so good for Call Center agents across the world.

Dozens of companies like SmartAction, Afiniti, and IPsoft’s Amelia, to name a few, are developing AI applications to integrate with Contact Centers that are capable of totally disrupting operations in a good way. These companies are moving quickly and producing some very impressive products that will transform omni-channel Customer Service.

Google, Amazon, Apple and Tesla are pushing AI to new frontiers and the residual learning and technology wave is raising all ships and leading to amazing breakthroughs in quality.

The programs that are being developed are so sophisticated that they are learning on their own and improving their performance. This “machine learning” is the key to explosive growth. AI is providing insight on customers and building profiles using interactions, call audio, and other contact data compiled from multiple sources, available in real time or for historical reporting. Sales and Service teams will have access to amazingly detailed and useful data about their agents and customers.

The what, when, where, and how of AI should be a key focus for all Contact Center leaders and those who use contact centers to serve their customers. The “why” is easy: AI leads to increased cost savings, more loyal customers, more buyers, longer retention of customers and employees (the human ones), reductions in infrastructure spending, better use of collected customer data, and, unfortunately for workers but good for company bottom lines, reductions in human workforce costs.

I spoke to a Venture Capital group today, and I can tell you that they are taking a very conservative approach to valuing the ROI of AI implementation. AI seems to have snuck up on everyone quietly, even the smart money, but it is here and it is growing fast.

The Contact Center industry is beginning the journey through major changes and improvements with AI and I’m very excited to be along for the ride.