Where's Your Line?
What industry rules on AI are telling us about what it means to be human.
🧠 This Week's Point: Where Humanity Stays Put
New York City released its first AI playbook for teachers this week. The guidelines for the city’s 1,600+ public schools begin by drawing some red lines around what AI is and what it is not. Among other things, the playbook says AI systems are:
Computer systems that perform tasks usually requiring human thinking—like finding patterns, sorting information, making predictions, or creating content
But AI is not:
A thinking, reasoning, or conscious being—it does not understand meaning or exercise judgment the way people do
Other descriptors of what AI is: “a tool that can help” and a “fast-moving technology that requires strong rules”. What AI is not: “a replacement” and “always accurate”.
The playbook gives teachers permission to use AI for, among other things:
Brainstorming lesson plans
Drafting emails
Handling scheduling and translation
What AI shouldn’t be used for, according to the playbook?
Assigning grades
Determining disciplinary action
Building specialized learning plans for students with disabilities
As large, critical, and complex fields like education adapt to (and sometimes struggle with) AI tools and define rules around their use, I’ve been thinking about where we draw the line. Where AI is okay and where it is not says a lot about what we think of our own humanity: what is irreplaceable, and what we value most about it.
In the case of NYC’s playbook, there is clearly a moral line around relationship and context. Even if AI is technically able to do some of the things on the “no” list, that doesn’t mean it should. What we should have, the playbook says, are human teachers who really know the student, are in relationship with them, and can read what is unsaid in a situation.
They need “adults who know them well enough to decide when A.I. belongs in their learning, and when it does not.” Note that this is a distinct idea from “adults who know A.I. well enough to decide when…”. AI literacy is important, but ultimately it’s the human relationship that should come first.
Each industry is trying to find its lane around AI right now. In Hollywood, SAG-AFTRA has drawn its line around human identity. The union’s position is that every person has an “inalienable right to their name, voice, and likeness”: any AI recreation of a performer requires explicit consent and fair compensation, and must be paid “on-scale” with a live performance in order to financially disincentivize the use of AI.
Last year, the union filed a charge against the producers of Fortnite for using AI to replicate James Earl Jones’s voice as Darth Vader without bargaining with the union. The principle: even after an actor has died, they retain rights to their unique voice and performance.
The Writers Guild drew a different line around authorship, or we could say more broadly, creativity. Under their 2023 contract, AI cannot be a “writer”, and no text produced by generative AI can be considered “literary material”. Studios can’t require writers to use AI, and if they hand a writer AI-generated material, it doesn’t count as source material for determining credits in a production. A writer can choose to use AI as a tool, analogous to their pen of choice or screenwriting software like Final Draft, but the creative origin of a story must be human.
The rules for these two creative unions are, of course, a mix: some draw what the unions see as a moral line around AI’s use in a member’s work (which might indirectly protect a job), while others are more directly about protecting those jobs from AI replacement. The former may be a more direct signal of what we consider uniquely and critically human, and of what gives us the uncanny-valley ick. But the nature of work is uniquely human too; it reminds me of the Catholic social teaching concept of the dignity of work. Across traditions and disciplines, work is valued as a distinctly human experience.
AI policy activity has been increasing in mental health as well. Illinois now prohibits the use of AI to make independent therapeutic decisions, to interact directly with clients in any form of therapeutic communication, or to generate treatment plans without a licensed professional’s review. Ohio has proposed similar rules that would also ban AI from attempting to detect emotional or mental states. Florida’s rules allow AI to be used only for administrative tasks and never for therapeutic ones; it can be used to transcribe therapy sessions only with written, informed consent from the patient at least 24 hours in advance.
The lines here feel strikingly analogous to those in education. Some studies suggest that therapy chatbots may have benefits in certain situations. But with a human therapist, the therapeutic relationship itself is what is critical: the act of being known, heard, and seen by a licensed professional is what keeps that social contract intact.
In medicine more broadly, the line is similar but feels more specific to clinical judgment, maybe discernment, and transparency. New laws in Texas and California allow AI to assist with diagnosis and treatment, but a human practitioner must personally review all AI-generated recommendations before any clinical decision is made. Proposed regulations in New York would require AI chatbots to clearly disclose that they are not licensed medical professionals, and to ensure patients know whether they’re talking to a human or a bot. Tools like OpenEvidence and Freed have become useful for doctors, supporting but not replacing expert judgment.
It’s hard to know whether these are the “right” lines in each industry, and naturally there will be plenty of controversy, infighting, and open questions as the tools and their use evolve. But I’ve found the lines being drawn to be a useful window into what we value as uniquely human in this moment:
relationship
context
identity
creativity
work
discernment
transparency
These lines are worth paying attention to, because they are a real-time record of what we believe makes us human, or at least the aspects of our humanity we’re choosing to safeguard in this moment.
NVIDIA founder and CEO Jensen Huang has said that what he considers “smart” right now (in a human) is someone who can see around corners. The truth is, we’re not great at seeing around corners when it comes to such fast-moving, powerful technologies. As MIT professor Justin Reich said in the NYT article about NYC schools, “historically, when we try to guess the best ways of using new technologies, we’re often wrong.”
Will AI go the way of the metaverse, or will we one day look back at AI detractors the way we look back at those who thought the Internet was a passing fad? I think it will be closer to the latter, though plenty of AI is overhyped. Wherever we land, I’m certain that embracing our humanity and using it as our North Star can lead us to a place where AI is a supportive, even transformative, tool that amplifies our humanity rather than replacing it.
🫀 The Human Bit
I got to spend some time this weekend with my family in a Science Center exhibit on cryptography — in the original meaning of the word: more old-school code-breaking, less Bitcoin. While not ideal for all ages’ attention spans, it was a fun callback to my early career and a chance to share with my daughter that human ingenuity made it possible to save the world using math.
Stay human ✌🏼
Emily