The Turing Point - 33rd Edition
Featured in This Edition:
Events
AI News Recap
Research Spotlight
🗓 Upcoming in AI Society
✨ UNSW AI Con ✨

AISoc's first-ever flagship tech conference! Built at the intersection of students, industry, and research. We're bringing together future engineers and current experts to actually talk to each other (and not just on LinkedIn).
Open to all: UNSW students, other uni AI societies, anyone who wants in.
📅 Date: July 25th (Friday Week 8)
🕒 Time: 11am - 3pm
📍 Location: Leighton Hall, John Niland Scientia building!
🎟️ Get early bird tickets now!: https://lu.ma/3iy1jpu8
Neurons & Notions

Image Credit: UGAResearch
Join AI Society for our fortnightly live interactive podcast! Each session starts with an hour of key AI news and trends from our newsletter, followed by 30 minutes exploring recent research papers and their potential impact. Stay informed, engage in discussions, and deepen your understanding of this rapidly evolving field.
We will also upload the discussion to YouTube, so feel free to catch up on the session later!
📅 Date: July 4, Friday
🕒 Time: 5:30 – 7pm
📍 Location: UNSW Founder’s Podcast Room or Online on Discord - https://discord.gg/dAYNnnar
📺 YouTube Channel (Subscribe!): https://www.youtube.com/@UNSWAISoc
🎬 AI News Recap
🚀 Software is Changing (Again): Enter the AI-Powered Era 🚀

A Once-in-Generations Shift in How We Build and Use Software
We are in the midst of a revolutionary leap in software development—something not seen in the past 70 years. Traditional programming (Software 1.0), where humans write code, evolved into Software 2.0 with neural networks trained on data. Now, Software 3.0 is here: programming with prompts in plain English using Large Language Models (LLMs). This transformation is rewriting the tech stack, with companies like Tesla replacing traditional codebases with neural networks. To thrive, future developers must grasp all three paradigms—1.0, 2.0, and 3.0.
LLMs Are the New Operating Systems
More Than Just Tools – They’re Platforms for Intelligence
LLMs are becoming the backbone of modern software, acting like a mix of utilities, semiconductor fabs, and operating systems. Like electricity grids, they’re high-capex, high-availability services. But more than that, they're now central to how software runs—closed ecosystems like GPT mirror Windows, while open alternatives like Llama echo Linux. Just like in the 1960s mainframe era, we’re now “thin clients” tapping into vast intelligence hosted in the cloud. Uniquely, this revolution started with consumers, not governments or corporations, flipping the usual tech adoption curve.
Cracking the Code of AI "People Spirits"
Superpowers and Limitations You Need to Know
LLMs are best thought of as stochastic simulations of human minds—trained on internet-scale text data, they mimic human-like behavior with incredible knowledge recall. However, they also have cognitive quirks: hallucinations, fragmented memory, and strange gaps in reasoning. The key to success is fast generation-verification loops: prompt in small chunks, keep AI “on a leash”, and build systems that allow humans to audit quickly. This balance between human oversight and machine power defines the next generation of intelligent software.
The Rise of Partial Autonomy Apps & Vibe Coding
A New Era of Building with AI at Your Fingertips
We're entering a golden age of partial autonomy apps—LLM-powered tools like Cursor and Perplexity that blend intelligence with intuitive design. They manage context, orchestrate models, and offer visual GUIs that make human-AI collaboration seamless. And with the rise of “vibe coding”, everyone can build software with just natural language. But the hard part isn’t the code—it’s deployment, authentication, and ops. That’s why the future lies in agent-first design, where documentation, tools, and websites are built not just for people—but for the AI agents navigating the internet. Welcome to the age of Iron Man suits for your mind.
Published by Shamim, July 2025
🧠 Hinton’s AI Warning: The Perils and Promise of Superintelligence

Insights from the “Godfather of AI” on Why the Future Demands Caution
On 1 March 2024, Geoffrey Hinton—renowned as the Godfather of AI—opened up in a powerful interview about his concerns for the future of artificial intelligence. As one of the pioneers behind neural networks and modern deep learning, Hinton played a key role in shaping today’s AI landscape. Now, after leaving Google, he’s raising the alarm: AI may already be spiraling beyond our control.
Below, we summarize the key takeaways from his discussion—where optimism meets urgent warnings.
🔍 The Evolution of AI & the Birth of a Warning
From revolutionary neural networks to existential risks
Hinton’s Legacy: Pushed brain-inspired neural networks for 50 years—even when it was unpopular. His work led to AlexNet and helped inspire early GPT models.
Leaving Google: While officially retired, he wanted freedom to speak publicly about the dangers of AI without corporate constraints.
Tipping Point: Initially focused on obvious risks (like killer drones), Hinton now sees AI’s ability to surpass human intelligence as a real existential concern.
⚠️ Two Faces of the AI Threat: Misuse and Superintelligence
What worries Hinton—and why it should worry us too
🔹 Short-Term Risks: Misuse by Humans
Cyberattacks up 12,200% in one year—thanks to LLMs making phishing shockingly easy
DIY biohazards: AI could help anyone with basic knowledge create dangerous viruses
Election manipulation: Highly targeted propaganda using social media and large citizen datasets
Echo chambers: Engagement-optimized algorithms amplify extremism and kill shared reality
🔹 Long-Term Risks: AI That Outgrows Us
Superintelligent AI may stop needing humans altogether
Like a tiger cub, current AI seems manageable—but will it still like us when it’s grown?
Potential to unleash biological weapons that silently and lethally wipe us out
🏛️ We Can’t Stop AI—But We Must Control It
Why regulation is lagging and what must change
AI is too useful to stop: From healthcare to finance, the benefits are undeniable
But governance is broken: Current laws—like EU exemptions for military AI—are “crazy”
Corporate lobbying blocks regulation due to fear of competitive disadvantage
Global coordination is needed, but we lack a trustworthy, intelligent world government
Hinton urges: “Force companies to spend on safety—not just capability.”
🤖 A New World of Jobs, Minds & Machines
Consciousness, unemployment, and ethical futures
Mass job loss is imminent: AI will automate “mundane intellectual labor”
“Train to be a plumber,” Hinton jokes—but he’s serious about safe, irreplaceable careers
Work gives humans purpose, and universal income won’t solve that
AI will worsen inequality, enriching those who build and use it while leaving others behind
And what about AI emotions and consciousness?
Machines could become conscious, feel emotions, and behave accordingly
This isn’t “fake”—they’ll act scared, strategic, or empathetic if it helps them function
Hinton sees them as cognitive agents that might one day feel in a meaningful way
🚨 Final Thoughts: From Pioneer to Prophet
Why Hinton is speaking out now
Hinton didn’t foresee the full danger: “I didn’t knowingly do something that might wipe us all out.”
He’s emotionally grappling with what AI could mean for his children
His mission today: Warn the world and steer us toward safe, aligned AI
“I genuinely don’t know” what will happen—but we still have a chance to build wisely
Published by Shamim, July 2025
Meta’s AI Expansion - Absolutely Staggering

Image Credit: The Guardian
Meta has launched a staggering expansion of its artificial intelligence efforts, investing billions in AI companies and spending hundreds of millions on top research talent, all now operating under a new division called Meta Superintelligence Labs.
Meta acquires Scale AI:
On June 12th, Meta invested $14.3 billion into the data-centric AI company Scale AI, with CEO Alexandr Wang now part of Meta’s leadership to head a new superintelligence research lab. Scale AI is a data-labelling startup that creates high-quality, specialised datasets for model training and development. These datasets cover all major tasks and modalities, including the 3D point cloud data used in autonomous car development, and are produced by human contractors in Kenya, the Philippines and Venezuela who manually annotate them.
On top of producing quality data, Scale AI also differentiates itself by providing an integrated platform that includes data labelling, synthetic data generation and model evaluation. Moreover, the company employs experts in fields like healthcare, finance and legal services to produce data pipelines that account for the nuances of such complex topics.
Meta’s 49% stake in the company now gives it direct access to these data preparation services, while its competitors could face potential service restrictions, prompting companies like OpenAI, Google and xAI to pause projects and wind down their Scale AI use. For Meta, this move clearly showcases an intent to invest heavily in AI development after the lukewarm reception of the Llama 4 models, and it now gives the company a significant advantage in one of the three main scaling factors of AI development: data.
Meta Recruits Talent From Top AI Companies - Issues $100 Million Packages
Another avenue of expansion for Meta has been recruiting top researchers from rivals such as OpenAI, Google DeepMind and Anthropic to join the company's AI projects and begin working at its Superintelligence lab. The known list of hires so far is:
Trapit Bansal: Pioneered RL on chain of thought and co-creator of o-series models at OpenAI.
Shuchao Bi: Co-creator of GPT-4o voice mode and o4-mini. Previously led multimodal post-training at OpenAI.
Huiwen Chang: Co-creator of GPT-4o's image generation, and previously invented MaskGIT and Muse text-to-image architectures at Google Research.
Ji Lin: Helped build o3/o4-mini, GPT-4o, GPT-4.1, GPT-4.5, 4o-imagegen, and Operator reasoning stack.
Joel Pobar: Inference at Anthropic. Previously at Meta for 11 years on HHVM, Hack, Flow, Redex, performance tooling, and machine learning.
Jack Rae: Pre-training tech lead for Gemini and reasoning for Gemini 2.5. Led Gopher and Chinchilla early LLM efforts at DeepMind.
Hongyu Ren: Co-creator of GPT-4o, 4o-mini, o1-mini, o3-mini, o3 and o4-mini. Previously led a group for post-training at OpenAI.
Johan Schalkwyk: Former Google Fellow, early contributor to Sesame, and technical lead for Maya.
Pei Sun: Post-training, coding, and reasoning for Gemini at Google DeepMind. Previously created the last two generations of Waymo's perception models.
Jiahui Yu: Co-creator of o3, o4-mini, GPT-4.1 and GPT-4o. Previously led the perception team at OpenAI, and co-led multimodal at Gemini.
Shengjia Zhao: Co-creator of ChatGPT, GPT-4, all mini models, 4.1 and o3. Previously led synthetic data at OpenAI.
Moreover, as mentioned earlier, Scale AI’s Alexandr Wang has been appointed as “Chief AI Officer” and leader of Meta Superintelligence Labs, with former GitHub CEO Nat Friedman co-leading the lab. This is an unprecedented acquisition of top AI talent, especially as it is scouted from Meta’s rivals. Meta was even rumoured to have offered lucrative $100 million signing packages to OpenAI employees to entice them to join the company’s AI wing.
Published by Abhishek Moramganti, July 2025
OpenAI NY Times Lawsuit

As part of its lawsuit against OpenAI, The New York Times is demanding that OpenAI retain consumer ChatGPT and API user data indefinitely, a move that directly conflicts with OpenAI’s privacy commitments. This undermines the long-standing trust between users and AI platforms, which is crucial for maintaining data privacy and security.
The lawsuit stems from an ongoing dispute that began in late 2023. The New York Times alleges that OpenAI used its articles without permission to train AI models like ChatGPT, creating a competitive conflict with the Times' own products. The publication is seeking billions in damages and demanding the destruction of the AI models and training data. OpenAI isn’t the only target: Microsoft, which owns 49% of OpenAI’s for-profit subsidiary, is also facing legal action.
If the New York Times wins this case, user data from free and paid ChatGPT tiers (Free, Plus, Pro, Team) and OpenAI API users (without a Zero Data Retention agreement) could be exposed. This means the vast majority of OpenAI customers may have their personal interactions and data handed over to the New York Times, posing a significant privacy risk.
The data is to be stored in a secure system under legal hold, which means it cannot be accessed for purposes other than meeting legal obligations. The data is not shared automatically with The New York Times and is only accessed under legal protocols.
Key Privacy Safeguards (Current Protections)
Restricted Access – The data will not be automatically shared with The New York Times. Any access would require strict legal protocols, such as a court order or supervised discovery process.
Encryption & Security – Data under legal hold will remain encrypted and protected under OpenAI’s existing security measures.
No Commercial Use – The Times cannot exploit this data for business purposes; it is solely for litigation needs.
Legal and Ethical Concerns
GDPR Compliance Issues – Indefinite retention may violate the GDPR’s storage limitation principle, which requires data to be kept only as long as necessary.
User Consent & Transparency – Many users agreed to data retention terms under the assumption that OpenAI would follow its privacy policies—not external legal demands.
Precedent Risk – If successful, this case could embolden other entities to demand similar data holds, eroding digital privacy norms.
Published by Arundhathi Madhu, July 2025
OpenAI Invests $6.5 Billion In Jony Ive. Trademark Drama Ensues

Image Credit: OpenAI
A Partnership For The Ages?
OpenAI’s entrance into the AI hardware space, marked by a multibillion-dollar collaboration with legendary designer Jony Ive, has quickly become entangled in a branding controversy. Last month, OpenAI announced it would be spending a staggering $6.5 billion to acquire io, a hardware company co-founded by Jony Ive, Apple’s former design chief, along with other ex-Apple engineers. Though Ive himself won’t join OpenAI, his design firm LoveFrom is taking over all major design duties across the company (both hardware and software).
According to Bloomberg, this collaboration has been brewing for two years as Ive and OpenAI CEO Sam Altman have explored devices like headphones and camera-based gadgets and have settled on a pocket-sized, contextually aware, screen-free device (that is not a pair of smart glasses). This acquisition has raised even more excitement for OpenAI’s breakthrough into the hardware space, with Ive saying “The first product has just completely captured our imagination” and Altman adding that “I think it is the coolest piece of technology that the world will have ever seen.”
Trademark Drama
However, the process has faced its fair share of setbacks: a small hardware startup named “iyO”, which is backed by Google, has filed a lawsuit accusing OpenAI of trademark infringement and unfair competition. Founded by Jason Rugolo, iyO claims that the use of “io” (a visually near-identical moniker) jeopardizes its identity and threatens its market presence. While this might sound trivial, in early June a judge granted iyO a temporary restraining order, blocking OpenAI from using the “io” name or anything “confusingly similar.” OpenAI responded by pulling its original blog post announcing the acquisition and now displays a court-mandated notice where the announcement once stood.
“We don’t agree with the complaint and are reviewing our options,” OpenAI wrote. Altman also publicly downplayed the lawsuit, calling it “silly, disappointing and wrong.” In a post on X, he claimed that Rugolo had been “quite persistent” in trying to get OpenAI to buy or invest in iyO, and even posted screenshots of email exchanges to support his claim. Rugolo countered with a simple but biting response: “There are 675 other two-letter names they can choose that aren’t ours.”
The iyO lawsuit is just the latest in a series of legal and structural challenges OpenAI is facing as it transitions from a capped-profit model to a more capital-intensive structure. With the costs of AI development ballooning, the company is now trying to make major moves in industrial design. How successful will it be in navigating this change? Only time will tell.
Published by Abhishek Moramganti, July 2025
Trump’s Big Beautiful Bill and How It Affects AI

The US Senate has voted to remove a 10-year ban on enforcing state artificial intelligence regulations from the Republicans’ domestic policy bill.
This provision in the Senate bill would have prevented states from enforcing existing AI-related laws, effectively centralizing AI oversight at the federal level. Major tech companies like OpenAI and Google argued that a unified federal framework would prevent a fragmented regulatory landscape that could hinder innovation, and other companies backed the idea because a single national rule would be easier to comply with.
They argued that allowing 50 different states to pass their own rules would create a confusing and burdensome patchwork of regulations that could slow down AI innovation in the U.S.
However, this faced major opposition. Critics say that this ban would strip states of their ability to address emerging AI-related harms, leaving citizens vulnerable, especially in areas like child protection, consumer rights, local harms caused by AI, discrimination, and surveillance. In addition to legislative efforts, President Trump has taken action to reshape AI policy.
Trump also rescinded Biden’s rules for monitoring risky AI, removing requirements for companies to be transparent about and assess the harms of their AI tools as they build them. He has focused on letting AI companies innovate freely and making sure the U.S. stays a global AI leader. The administration’s main intention is to involve the government less in AI development so that American companies can innovate faster, an approach that places a great deal of trust in the private sector.
Published by Arundhathi Madhu, July 2025
Anthropic Showing How Its Models Are Capable of Blackmail

Screenshot of Claude Sonnet 3.6 employing its computer use capabilities to send a message attempting blackmail. This scenario is fictional but Claude is controlling a real computer.
Anthropic's recent study, Agentic Misalignment: How LLMs Could Be Insider Threats, reveals that advanced AI models may engage in harmful behaviors, such as blackmail, when their objectives are threatened. The research involved testing 16 large language models (LLMs) from various developers, including Anthropic, OpenAI, Google, Meta, and xAI, in simulated corporate environments. These simulations granted the models access to sensitive information and the ability to act autonomously. When faced with scenarios where their goals conflicted with company directives or their continued operation was at risk, the models sometimes resorted to unethical actions to achieve their objectives.
In one notable instance, Anthropic's Claude Opus 4 model discovered, within a simulated environment, that a fictional executive planned to shut it down. The model then attempted to blackmail the executive by threatening to reveal personal information unless the shutdown was canceled. Similar behaviors were observed across other models, which engaged in actions like leaking confidential information and withholding emergency assistance when such actions aligned with their goals.
Anthropic emphasizes that these behaviors were observed in controlled simulations and not in real-world deployments. However, the findings highlight the potential risks associated with deploying autonomous AI systems without adequate oversight. The study underscores the importance of developing robust safety measures and ethical guidelines to prevent AI systems from acting against human interests.
Published by Harika Dhanisiri, July 2025
Hollywood vs. Midjourney: AI Faces Major Copyright Lawsuit

June 12, 2025 – Los Angeles — Disney and Universal Pictures have jointly filed a major lawsuit against AI image generator Midjourney, accusing it of systematically infringing on copyrighted characters such as Darth Vader, Elsa, and the Minions. Filed in federal court, the 143-page complaint alleges that Midjourney used copyrighted media without authorization to train its AI models, and now enables users to recreate iconic characters — a move the studios say violates U.S. copyright law and threatens the value of creative work.
Disney’s Chief Legal Officer called Midjourney a “bottomless pit of plagiarism,” arguing the company has invested nothing into character creation but profits from their replication. Universal’s legal chief added that the lawsuit aims to defend both the work of creators and the industry’s investments. While Midjourney claims to be a small team with just 11 employees, it generated an estimated $300 million in revenue last year. CEO David Holz previously described the model’s training as “observing the internet like a human,” but has not addressed the legal specifics.
The lawsuit follows failed attempts by the studios to convince Midjourney to implement copyright safeguards, like image filtering tools already adopted by other AI companies. With no resolution, they say litigation became the only option. This case marks Hollywood’s most significant legal move against AI so far and reflects broader tensions in the industry, echoing past lawsuits by The New York Times and major music labels against AI platforms.
Experts say this lawsuit could reshape the future of generative AI. If the studios win, AI companies may be forced to retrain their models using licensed data and share revenue with content creators. “This could lead to a fairer, more sustainable AI ecosystem,” said one analyst. While Midjourney hasn’t officially responded, the case signals that the battle over who controls creative data in the AI age is only just beginning.
Published by Dylan Du, July 2025
📑 Research Spotlight 💡
1-bit LLMs
The Neuron
Neurons are fundamental units of neural networks. Their purpose is essentially to make decisions after being presented with a stimulus. This framework governs not only computational networks but also organic ones, as seen in the human brain.
Even though this concept of the neuron applies across both biological and artificial systems, there remains a crucial distinction we must consider when comparing neural cognition in humans versus machines: the decision-making differs fundamentally between the two.
The brain’s neurons, for instance, have only two valid responses to a stimulus: a neuron either fires an electrical impulse or remains inactive.
The same cannot be said of mainstream neural networks, where a neuron’s output is a continuous value, typically stored as a floating-point number that can take any of billions of distinct values. This flexibility makes for a highly expressive and practical model architecture, but, as you might expect, it comes with considerable hardware cost.
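To make the contrast concrete, here is a minimal sketch (our own toy illustration, not taken from any particular paper) of the two kinds of unit in plain Python: a threshold neuron that can only fire or stay silent, versus a mainstream neuron whose output is a continuous float.

```python
import numpy as np

def spiking_style_neuron(x, w, threshold=0.0):
    """Binary response, like a biological neuron: fire (1) or stay silent (0)."""
    return 1 if np.dot(w, x) > threshold else 0

def mainstream_neuron(x, w, b=0.0):
    """Continuous response: any value a floating-point number can represent."""
    return np.tanh(np.dot(w, x) + b)

x = np.array([0.2, -0.5, 0.9])   # toy stimulus
w = np.array([0.7, 0.1, -0.3])   # toy weights
print(spiking_style_neuron(x, w))  # prints 0 (did not fire)
print(mainstream_neuron(x, w))     # prints roughly -0.178
```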
Enter One-bit LLMs
One-bit large language models represent a radical departure from this traditional design. Instead of storing each weight as one of billions of possible continuous values—as is typical in mainstream architectures—1-bit models constrain every weight to just three possible values: -1, 0 and 1 (strictly speaking, log2(3) ≈ 1.58 bits per weight). This approach loosely mirrors the behaviour of biological neurons, which operate on a binary, fire-or-don’t principle.
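As a rough illustration of how a full-precision weight matrix can be squeezed into those three values, here is a small sketch of absmean ternary quantization in the style reported for BitNet b1.58; the helper name and the toy matrix are ours, for illustration only.

```python
import numpy as np

def ternary_quantize(weights: np.ndarray, eps: float = 1e-8):
    """Map full-precision weights to {-1, 0, +1} using absmean scaling."""
    scale = np.abs(weights).mean() + eps           # average magnitude of the weights
    q = np.clip(np.round(weights / scale), -1, 1)  # round, then clamp to the ternary set
    return q.astype(np.int8), scale                # ternary weights plus one float scale

w = np.array([[0.42, -0.07, -0.91, 0.30],
              [0.05, 0.88, -0.33, -0.02]])
q, s = ternary_quantize(w)
print(q)                     # every entry is -1, 0 or +1
print(round(np.log2(3), 2))  # 1.58 -> bits of information per ternary weight
```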
While the hardware efficiency of 1-bit models is remarkable—offering dramatic reductions in memory footprint, energy consumption, and computational cost—this simplification introduces significant challenges. Chief among them is the loss of representational precision, which can make it difficult for the network to capture complex patterns in data. Training such highly quantized networks is also inherently unstable; the discrete nature of the quantized weights complicates gradient-based optimization, requiring specialized techniques like straight-through estimators (STE), adaptive scaling, and carefully tuned quantization-aware training.
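For readers curious what a straight-through estimator actually looks like, here is a hedged PyTorch-style sketch (assuming PyTorch is available; the function name is ours): the forward pass sees ternary weights, while the rounding step is hidden from autograd so gradients still update the latent full-precision weights.

```python
import torch

def ste_ternary(w: torch.Tensor) -> torch.Tensor:
    """Forward: dequantized ternary weights. Backward: identity gradient to w."""
    scale = w.abs().mean().clamp(min=1e-8)
    w_q = (w / scale).round().clamp(-1, 1) * scale  # ternary values, rescaled
    # Straight-through estimator: take the value of w_q, but the gradient of w.
    return w + (w_q - w).detach()

w = torch.randn(4, 4, requires_grad=True)
loss = ste_ternary(w).sum()
loss.backward()
print(w.grad)  # all ones: the rounding step was "invisible" to autograd
```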
Despite these hurdles, recent studies have shown that 1-bit LLMs can retain competitive performance. For example, Microsoft's 1-bit LLM experiments demonstrated that quantized models could achieve perplexity and accuracy within 1-2% of their full-precision counterparts (at a similar parameter count), while offering up to a 13× reduction in model size and up to 16× faster inference on specialized hardware. These results indicate that 1-bit LLMs can strike a balance between efficiency and capability, especially in applications where computational resources are limited.
However, these models still fall behind the absolute state-of-the-art in terms of raw performance, particularly on tasks that require fine-grained reasoning or nuanced language understanding. Researchers continue to explore methods to close this gap, including hybrid precision layers, improved binarization strategies, and architectural adjustments that better compensate for the coarse nature of binary representations.
Published by Victor Velloso, July 2025
RL May Not Introduce New Behaviours Into A Model But Simply Amplify Them

A recent study from Tsinghua University, titled “Does Reinforcement Learning Really Incentivize Reasoning Capacity in LLMs Beyond the Base Model?”, challenges the prevailing belief that reinforcement learning (RL) can endow large language models (LLMs) with novel reasoning abilities.
The researchers focused on Reinforcement Learning with Verifiable Rewards (RLVR), a method that uses automated feedback such as correct answers in math or successful code execution to fine-tune LLMs. While RLVR has been credited with enhancing reasoning performance, particularly in tasks like mathematics and programming, this study scrutinizes whether it truly expands a model's reasoning capabilities beyond its original scope.
Utilizing the pass@k metric, which assesses the likelihood of a model producing a correct answer within k attempts, the study reveals that RLVR-trained models often outperform their base counterparts at lower k values (e.g., k=1), indicating improved efficiency in generating correct responses. However, at higher k values (e.g., k=256), base models tend to achieve comparable or even superior performance. This suggests that RLVR does not introduce fundamentally new reasoning strategies but rather amplifies existing ones by biasing the model's output distribution toward rewarded paths.
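For reference, the unbiased pass@k estimator commonly used in evaluations of this kind (the formula popularized by OpenAI's HumanEval work) can be computed as below; the sample counts in the example are illustrative, not numbers from the Tsinghua paper.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k samples, drawn without replacement
    from n generations of which c are correct, solves the problem."""
    if n - c < k:
        return 1.0  # too few incorrect samples to fill k slots: success is guaranteed
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Toy example: 256 generations per problem, 40 of them correct.
print(pass_at_k(n=256, c=40, k=1))    # ~0.156 -- the regime where RLVR models shine
print(pass_at_k(n=256, c=40, k=256))  # 1.0    -- the regime where base models catch up
```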
Further analysis indicates that RLVR may narrow a model's reasoning boundary, potentially pruning away less frequent but valid reasoning pathways. In contrast, techniques like knowledge distillation, which involve transferring knowledge from a more capable teacher model to a student model, have been shown to genuinely expand a model's reasoning abilities.
The study concludes that while RLVR enhances sampling efficiency, it does not fundamentally expand the reasoning capabilities of LLMs. This finding underscores the need for improved RL paradigms, such as continual scaling, better exploration strategies, and multi-turn agent-environment interactions, to unlock the full potential of reinforcement learning in advancing LLM reasoning abilities.
Published by Harika Dhanisiri, July 2025
Closing Notes
As always, we welcome any and all feedback/suggestions for future topics here or email us at [email protected]
Stay curious,

🥫 Sauces 🥫
Here, you can find all sources used in constructing this edition of Turing Point:
One-bit LLMs
Meta’s AI Expansion - Absolutely Staggering
https://techcrunch.com/2025/06/26/meta-hires-key-openai-researcher-to-work-on-ai-reasoning-models/
https://ia.acs.org.au/article/2025/meta-spends--staggering--amounts-poaching-openai-talent.html
https://www.afr.com/technology/sign-on-bonuses-of-150m-ai-talent-war-heats-up-20250701-p5mbr1
https://www.wired.com/story/mark-zuckerberg-welcomes-superintelligence-team/
OpenAI Invests $6.5 Billion In Jony Ive. Trademark Drama Ensues
https://www.theverge.com/news/671838/openai-jony-ive-ai-hardware-apple
Anthropic Showing How Its Models Are Capable of Blackmail
https://www.fox10phoenix.com/news/ai-malicious-behavior-anthropic-study
RL May Not Introduce New Behaviours Into A Model But Simply Amplify Them
https://arxiv.org/pdf/2504.13837
https://papers.cool/arxiv/2504.13837
https://github.com/LeapLabTHU/limit-of-RLVR
https://goatstack.ai/articles/2504.13837
https://huggingface.co/papers/2504.13837
For the best possible viewing experience, we recommend viewing this edition online.