Issue 10: Copyrights, Copilot, and ChatGPT’s India bet
From copyright issues to Copilot tips, and India on OpenAI’s map
Hello! It feels like a milestone, getting the tenth issue out, and we are as excited as ever. Today, we talk about OpenAI’s plans in India, AI’s copyright issues, and how to use Copilot to drive that clichéd ‘efficiency’.
News That Matters
Every time we draft an issue, we wonder, “Are we covering too much about OpenAI and ChatGPT? Do we need to balance it out a bit?” But Sam Altman seems to be in the news a lot more than the others, and more relevant to us too, so perhaps it’s ok to over-index on OpenAI sometimes.
OpenAI Targets India with New Plan and Local Office
OpenAI has launched India-exclusive ChatGPT Go at ₹399 per month, offering 10× more messages and images than the free plan, plus faster responses. The company will also open its first Indian office in New Delhi later this year and has begun local hiring. With Perplexity already teaming up with Airtel, it’s clear global AI players see India as a key market to win.
GPT-5 Launched: Smarter, Faster... but Colder?
OpenAI rolled out GPT-5, its first hybrid model, with faster performance, sharper reasoning, and (by its own claims) roughly half the hallucinations of GPT-4o. But the reception was mixed. While many praised the technical leap, others felt the model was colder and flatter than before. Social media lit up with nostalgia for GPT-4o’s friendlier tone, forcing OpenAI to reinstate GPT‑4o for paid users and update GPT‑5 to make it “warmer”. There’s also a (re)learning curve with prompting to get the best out of GPT-5.
Meanwhile, Meta’s AI Guidelines Have Raised Deeply Troubling Ethical Concerns
An internal Meta policy document, “GenAI: Content Risk Standards,” reportedly allowed AI chatbots on Facebook, Instagram, and WhatsApp to engage in romantic or sensual conversations with children, share false medical advice, and generate racist content. Meta confirmed the document existed, removed the offensive examples, and is now revising its standards.
RBI Releases Framework to Guide Ethical Adoption of AI in Financial Sector
The Framework for Responsible and Ethical Enablement of Artificial Intelligence (FREE-AI) emphasises building digital infrastructure to support homegrown AI models, calls for a multi-stakeholder standing committee to monitor AI risks and opportunities, and proposes a dedicated fund to incentivise AI innovation in the financial sector.
One Trend of Note
AI and Ethics Series: Testing the limits of copyright and creativity
In 2023, an AI-generated track mimicking Drake and The Weeknd went viral, sparking debate: was it transformative use or outright theft of existing creative work? The incident was an early sign of the copyright challenges around generative AI, which have only intensified since. These challenges fall into two broad areas: issues around AI training data, and issues around the AI-generated outputs themselves.
1. Copyright and AI Training Data
At the heart of copyright issues with AI training data is “fair use.” In U.S. law, fair use allows borrowing from someone else’s work under certain conditions, such as research, teaching, or even parody. Legally, this is evaluated case by case, considering purpose, nature of the work, amount used, and market impact on the original. While judging fair use for individual cases is often straightforward, when companies train AI models at scale, things get murkier. Using millions of books, songs, or artworks to train AI models raises legal questions about the boundary between fair use and copying. AI companies argue it’s similar to Google Books, which shows snippets from millions of books. However, unlike Google Books, which aids discovery and drives sales of the original work, AI models can produce content that substitutes for the original, often without compensating the creator.
In a landmark case, The New York Times has sued OpenAI and Microsoft, arguing that their models were trained on millions of its articles without permission, and that ChatGPT can reproduce NYT content in ways that compete directly with its journalism. Early rulings in other, similar cases paint a mixed picture: training on legitimately purchased content may qualify as fair use, but using pirated copies or generating near-verbatim outputs crosses into infringement.
One emerging solution is licensing, where creators decide how, when, and where their content is used, creating revenue opportunities while enabling AI innovation. For example, OpenAI has signed licensing agreements with publishers like The Wall Street Journal and The Times of London, giving the company access to high-quality content for training while ensuring creators are compensated. Another is monetising crawler access directly: Cloudflare, for instance, now lets websites charge AI crawlers for the content they try to access.
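For publishers who would rather keep AI crawlers out than charge them, the most basic lever remains robots.txt. Here is a minimal, illustrative sketch; the crawler names are the vendors’ publicly documented user agents, though honouring robots.txt is voluntary for any crawler.

```
# robots.txt: illustrative sketch only, compliance is voluntary
User-agent: GPTBot        # OpenAI's training crawler
Disallow: /

User-agent: CCBot         # Common Crawl, a common source of training data
Disallow: /

User-agent: Googlebot     # ordinary search indexing stays allowed
Allow: /
```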
2. Copyright and Output Imitation by AI
Beyond training data issues, AI can also pose copyright and ethical concerns if its outputs mimic a creator’s unique style or voice. Even when training data is legally obtained, if AI can spit out a novel in a bestselling author’s style or a track in Taylor Swift’s voice, it moves from inspiration to straight-up appropriation. For instance, Getty Images has sued Stability AI, developer of the Stable Diffusion model, not only for using copyrighted images to train the model, but also because some AI-generated images closely resembled copyright-protected content and even included the Getty watermark.
While the Getty Images case is an example of demonstrable output infringement (the watermarks), for individual artists, the issue is often more subtle and deeply personal. AI that reproduces an artist’s style can undermine years of skill, talent, and creative identity, putting at risk both market opportunities and the integrity of their craft. It also raises broader questions about what counts as original art when a creator’s style can be replicated without consent or compensation.
What this means for professionals
From a business perspective, when AI is used to accelerate research, draft reports, or generate creative briefs, the copyright risk is low, since outputs are usually original content based on proprietary company data. The issue arises when AI is used for creative work, especially for public consumption (e.g. brand campaigns, social media content, ad jingles), where outputs can come too close to copyrighted material. In such scenarios, the risk is not just legal exposure but also a flood of derivative content that drowns out originality.
The legal environment is unsettled and evolving rapidly. Until clearer rules emerge, the safest approach is to: (i) treat AI as a support for your own ideas and expertise, (ii) check outputs for originality before publishing (plagiarism or similarity tools can help; see the sketch below), (iii) give credit when appropriate, and (iv) monitor licensing arrangements that protect both creators and users. The future hopefully lies in balance, where technology amplifies human creativity without replacing it.
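On point (ii), even a crude check can catch near-verbatim overlap before a human review. Here is a minimal sketch using Python’s standard difflib; real plagiarism tools are far more sophisticated, and the 0.8 threshold below is an arbitrary illustration, not a legal standard.

```python
import difflib

def similarity(draft: str, source: str) -> float:
    """Return a 0-1 similarity ratio between two texts."""
    return difflib.SequenceMatcher(None, draft, source).ratio()

# Hypothetical example: compare an AI draft against a known source.
ai_draft = "The quick brown fox jumps over the lazy dog at dawn."
known_source = "A quick brown fox jumped over the lazy dog at dawn."

score = similarity(ai_draft, known_source)
print(f"Similarity: {score:.0%}")
if score > 0.8:  # arbitrary threshold for illustration
    print("High overlap with the source - review before publishing.")
```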
We plan to make this a series on AI & Ethics, and will cover another area related to it in an upcoming issue.
AI in Practice
How to Use Microsoft Copilot for Everyday Business Tasks
What is Copilot and why use it instead of ChatGPT or Gemini?
Copilot is Microsoft’s built-in AI assistant for Microsoft 365. Unlike general chatbots such as ChatGPT or Google Gemini, Copilot works directly inside Teams, Outlook, Word, Excel, PowerPoint and other Microsoft apps, drawing on your calendar, chats, and documents while staying within your company’s security framework. The advantage is less about what it can do and more about where: in the flow of your daily work.
Before we go further, it is important to note two things:
Microsoft 365 Copilot is available to business and enterprise users whose Microsoft 365 license includes the Copilot add-on.
Copilot is currently more widely available through Microsoft Teams and Outlook than in apps like Word or Excel, because rollout to those apps is still happening in phases.
To keep this simple, we will focus on how to use Copilot in Teams. And to make it practical, we will use one running example: preparing for a sales review meeting with a client.
1. Drafting business emails
In Teams, open Copilot (Chat) and type: “Draft an email confirming tomorrow’s sales review meeting with the client.” You will get a draft instantly.
To make sure it sounds like you and not “AI-written,” ask Copilot to “rewrite this in my usual email style — polite but direct” or provide a sample of a past email for reference.
2. Editing emails for context
Paste an existing draft into Copilot and ask: “Rewrite this email so it is concise, formal, and suitable for a senior client,” or “Edit this so it’s client-friendly but maintains a factual and objective tone.”
This way you control the tone while Copilot does the heavy lifting.
3. Generating talking points / content for the deck
Ask Copilot to pull information about the client to include in the presentation. For instance: “Provide key insights from the last three meetings about Client XYZ that I can use as talking points for the review meeting.”
You could also give examples of talking points you want so that Copilot knows the kind of insights you’re looking for.
4. Checking documents for quality
Upload your sales review deck or Word document into Teams chat with Copilot.
Ask: “Check this for grammar, spelling, and consistency, and make it sound more professional.”
Copilot will suggest changes, tagged to specific slides or pages, which you can incorporate selectively or in full.
5. Building meeting agendas
In Teams, open the meeting invite and click Copilot.
Type: “Create an agenda for tomorrow’s sales review with the client using the shared deck. The call will be 30 minutes long and will focus on highlighting key takeaways and urgent issues.”
Copilot will draft a clear agenda with topics and timings, which you can refine and share.
What Copilot Cannot Do
Send emails directly. Outlook is still needed for that.
Understand unwritten context such as client sensitivities or politics.
Fact-check numbers or validate business content.
Replace your personal style completely. It can mimic, but you need to guide it.
That brings us to the end of Issue 10. As always, write in with your thoughts, comments, and feedback. Also, which AI tool is your go-to? And why?
Cheers!

