Google Reading Mode app expanding its support to more social media and email apps

Google released its Reading Mode app in late 2022, and a new update now makes it considerably more useful. With this update, Reading Mode will also work in Gmail, X (formerly Twitter), and Threads, along with other social media and email apps.

As for what the app actually does, Reading Mode strips each page you view down to plain text, making it far easier to read. As mentioned above, the new update brings that support to Gmail, X, Threads, and other email and social media apps, which is a genuinely useful addition.
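
Google hasn't published how Reading Mode extracts readable text, but the general idea, reducing a cluttered page to its main content, can be sketched in a few lines of Python. This is a rough illustration using the third-party requests and beautifulsoup4 packages, not Google's actual pipeline:

```python
# Illustrative sketch only -- Reading Mode's real extraction is proprietary.
# Requires: pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

def to_plain_text(url: str) -> str:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    # Drop non-content elements before extracting text.
    for tag in soup(["script", "style", "nav", "header", "footer", "aside"]):
        tag.decompose()
    # Collapse the remaining paragraphs into plain, readable text.
    paragraphs = (p.get_text(strip=True) for p in soup.find_all("p"))
    return "\n\n".join(p for p in paragraphs if p)

print(to_plain_text("https://example.com"))
```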

Furthermore, if you are on the go, the app can also read the text aloud so you can listen to it instead. The developers note, however, that the app may not work properly with all social media and email apps yet, though it might work with all of them in the future.

Do you use the Reading Mode app? Let us know in the comment box, and tell us whether you liked this post.

Kshitiz Jangra is currently pursuing a Bachelor of Computer Applications (BCA). He likes programming languages such as Python, C, and Java, and has an interest in smartphones and gadgets in the tech industry. Kshitiz also likes to eat junk food from popular places in India.

Google NotebookLM: AI-Powered Note-Taking Revolution with Gemini Integration

Google NotebookLM is the company's latest leap in AI-enhanced productivity tools, designed to help users take smarter notes and process information faster. Originally introduced as Project Tailwind at Google I/O 2023, the platform has since evolved into a robust and intelligent research assistant under the NotebookLM brand. Powered by Google's Gemini language model, it aims to transform the way users study, write, analyze, and synthesize information.

As of 2025, Google has expanded NotebookLM’s capabilities, making it far more than just a basic note app. It now serves as a personalized AI research assistant, helping users extract insights, build summaries, and even generate podcast-style audio breakdowns — all from their own uploaded content.

What Makes Google NotebookLM Stand Out

NotebookLM is not just another note-taking app. What sets it apart is its deep integration with AI and its ability to generate responses and summaries based on user-provided source materials. Users can upload documents from Google Docs, PDFs, websites, or Google Slides, and the app creates structured summaries, explanations, and Q&A outputs derived specifically from that content.

Here are the standout features that make NotebookLM unique:

  • Document Integration: Upload your own study materials, client documents, or reports. The AI processes it all and becomes context-aware of your material.

  • Citation & Source Accuracy: When generating summaries or answers, NotebookLM includes inline citations so you can trace every piece of information back to its original document. This ensures clarity and transparency — crucial for academic and professional users.

  • Audio Overviews: One of the most innovative features is the ability to produce podcast-style summaries. Users can turn complex documents into audio recaps featuring AI-generated hosts who explain the content conversationally — ideal for auditory learners.

  • Powered by Gemini 1.5 Pro: This allows NotebookLM to handle massive context windows — it can process much larger volumes of text than traditional LLMs, offering deeper comprehension and more nuanced summaries.
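
NotebookLM itself has no public API, but the grounding idea behind these features, answering only from numbered sources and citing them inline, can be approximated with Google's public Gemini API. Here is a minimal sketch; the file names and prompt wording are illustrative, not NotebookLM's actual implementation:

```python
# Hypothetical sketch of source-grounded Q&A with inline citations.
# Requires: pip install google-generativeai
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumes you have a Gemini API key
model = genai.GenerativeModel("gemini-1.5-pro")

# Hypothetical "uploaded" documents, numbered so the model can cite them.
sources = {
    1: open("lecture_notes.txt").read(),
    2: open("research_paper.txt").read(),
}
context = "\n\n".join(f"[Source {i}]\n{text}" for i, text in sources.items())

prompt = (
    "Answer ONLY from the sources below, citing them inline as [1], [2].\n"
    "If the sources do not contain the answer, say so.\n\n"
    f"{context}\n\nQuestion: What are the key takeaways?"
)
print(model.generate_content(prompt).text)
```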

Availability and Expansion Plans

Initially available only in the U.S., NotebookLM has since opened up to a broader audience, and Google has removed the "experimental" label. Google has also introduced NotebookLM Plus, a paid tier aimed at professionals and enterprise users who need enhanced tools and greater document-handling capabilities.

The standard version remains free, making it accessible to students, researchers, and writers who want to test its features without upfront costs.

Real-World Use Cases

1. Academic Research: Students can upload class notes, research papers, and textbook PDFs to quickly generate study guides or ask context-specific questions. NotebookLM becomes a 24/7 AI tutor.

2. Content Creation: Bloggers, journalists, and scriptwriters can use it to turn research documents into outlines, summaries, or even audio recaps. This streamlines pre-production workflows and helps beat creative blocks.

3. Business and Legal Use: Professionals handling policy documents, financial reports, or legal contracts can benefit from NotebookLM's ability to highlight, summarize, and pull insights quickly.

4. Educational Podcasting: Teachers and YouTubers can convert raw content into audio-overview scripts, a new way to deliver lessons.

User-Friendly and Transparent AI

Google has designed NotebookLM with user trust in mind. It cites its sources and is designed to avoid "hallucinating" facts that are not present in the uploaded material. This increases reliability, especially for users relying on it for academic and professional outputs.

Moreover, the clean interface and chat-style interaction make it easy to query your content with prompts like "Summarize this report in 5 bullet points" or "What are the key takeaways from chapter 2?"
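
Again, NotebookLM exposes no API of its own, but that chat-style interaction can be mimicked against the public Gemini API; the file name and prompts below are made up for illustration:

```python
# Hypothetical sketch of chat-style querying over your own document.
# Requires: pip install google-generativeai
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-pro")

report = open("quarterly_report.txt").read()  # hypothetical uploaded document

# Seed the chat with the document, then query it conversationally.
chat = model.start_chat(history=[
    {"role": "user", "parts": [f"Here is my report:\n{report}"]},
    {"role": "model", "parts": ["Got it. Ask me anything about the report."]},
])
print(chat.send_message("Summarize this report in 5 bullet points.").text)
print(chat.send_message("What are the key takeaways from chapter 2?").text)
```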

Conclusion

Google NotebookLM is more than just a productivity tool; it's a glimpse into how artificial intelligence can revolutionize knowledge management. From personalized research summaries to podcast-style learning, NotebookLM is setting a new standard in the world of AI-powered note-taking. With Gemini at its core and real-world usability in its design, it's poised to become the go-to assistant for students, creators, educators, and professionals.

As the tool continues to evolve and roll out to more users, NotebookLM could very well redefine how we consume, organize, and act upon information in our daily lives.

Google Android XR Glasses Unveiled at TED 2025 — The Future of Wearable AI Is Here

Image credit: The outpost.AI

Google Android XR Glasses made a surprise appearance at the TED 2025 conference in Vancouver, and they’ve already sparked massive interest in the future of wearable AI. Showcased during a live demo by Shahram Izadi, the head of Google’s AR and VR division, these smart glasses blend augmented reality, AI, and minimalistic design in one sleek package.

While they are still in the prototype phase, the Android XR glasses offer a compelling glimpse into how we may interact with the world around us in the near future—hands-free, screen-free, and powered by real-time artificial intelligence.

What Are Google Android XR Glasses?

The Google Android XR Glasses are a lightweight pair of smart glasses powered by Google’s own Gemini AI and a new operating system called Android XR. Unlike traditional AR headsets, these glasses don’t come with heavy onboard processing. Instead, they offload computing to a paired smartphone, which handles the heavy AI lifting.

During the TED 2025 demo, the glasses successfully:

  • Translated Farsi to English in real time
  • Scanned the contents of a book
  • Displayed speech notes directly in Shahram Izadi’s field of vision

These features were achieved through seamless AI integration, making the experience look more like magic than tech.
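
Google hasn't detailed how the glasses and the phone communicate, but the phone-side pattern described here, receiving a camera frame, running a multimodal model, and sending text back for the in-lens display, can be sketched with the public Gemini API. The frame file and prompt are hypothetical:

```python
# Illustrative sketch of the paired phone doing the heavy AI lifting.
# Requires: pip install google-generativeai pillow
import google.generativeai as genai
import PIL.Image

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")  # a fast model suits live use

# Hypothetical frame streamed over from the glasses' built-in camera.
frame = PIL.Image.open("camera_frame.jpg")
response = model.generate_content(
    [frame, "Translate any Farsi text in this image into English."]
)
print(response.text)  # on the real device, this would go to the in-lens display
```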

How Do They Work?

At their core, Google’s Android XR Glasses use:

  • A built-in camera
  • In-lens transparent display
  • Microphone & bone-conduction speakers
  • Gemini AI via smartphone pairing

This architecture ensures that the glasses are light enough to wear like normal eyewear while still offering real-time computing, smart prompts, and visual overlays. It's like Google Assistant on steroids, living directly in your vision instead of on your phone.

The AI can understand what you’re seeing, hearing, or saying, and respond contextually—whether it’s translating text, identifying objects, reading your calendar, or helping you navigate using real-time directions overlaid on the street.

Integration with the Google Ecosystem

The Android XR platform supports many existing Google services. Users can expect access to:

  • Google Maps for AR navigation
  • YouTube Music for gesture-based playback
  • Google Lens features for live translation, image search, and more

The aim is to create a device that doesn’t distract you from the real world but enhances it, letting you do more with less screen time.

Google + Samsung: XR Revolution in the Making

While Google hasn’t confirmed if these glasses will go commercial soon, one thing is clear—they’re not working alone. Google has partnered with Samsung, which is reportedly developing:

  • Smart glasses under the codename “Haean”
  • A mixed reality headset called “Project Moohan”

Both are expected to run on Android XR. This collaboration signals a serious move into spatial computing, with Android XR poised to become the platform equivalent of Android for the AR era.

Challenges to Overcome

Despite the impressive TED demo, Google faces several real-world challenges before Android XR glasses become mainstream:

  • Battery life: Can they last all day on a single charge?
  • Privacy concerns: Always-on cameras and microphones can raise questions
  • Pricing: Advanced wearable tech doesn’t come cheap
  • Mass production: Scaling lightweight AR wearables isn’t easy

Yet, if any company has the ecosystem and technical chops to pull this off, it’s Google—especially with Samsung backing the hardware side.

Final Thoughts: Is This the Next Big Thing?

The Google Android XR Glasses may not have a release date yet, but their first real-world appearance shows that wearable AI is not some distant dream—it’s already being tested. The combination of Gemini AI, sleek design, and AR functionality could mark the biggest leap in tech since smartphones.

While it may take another year or two for consumer-ready versions, this TED 2025 preview confirms one thing: Google’s vision for spatial computing is real, ambitious, and closer than we think.

Google is Developing Gemini for Headphones

Google has recently renamed its AI assistant, previously Bard, to Gemini. The Gemini app is already available in the U.S., Canada, Asia Pacific, Latin America, and Africa, and it will replace Google Assistant on both Android and iOS.

Note: The Gemini app is expected to be launched in India by next week.

Some reports also suggest that Gemini's mobile app team is working on expanding availability to make it accessible on your headphones very soon. 9to5Google recently examined the beta version of the Google app (15.6), which contains the message listed below:

Gemini mobile app is working on expanding availability to make it accessible on your headphones
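
Teardowns like this generally work by decompiling the beta APK (for example with apktool) and searching the extracted resources for new strings. Here is a minimal sketch of that search step; the directory name and keyword are illustrative:

```python
# Hypothetical sketch: grep decompiled APK resources for feature strings.
# First decompile the APK, e.g.: apktool d google-app-15.6.apk
import pathlib

def find_strings(apk_dir: str, keyword: str) -> None:
    for path in pathlib.Path(apk_dir).rglob("*.xml"):
        for line in path.read_text(errors="ignore").splitlines():
            if keyword.lower() in line.lower():
                print(f"{path}: {line.strip()}")

find_strings("google-app-15.6", "headphones")
```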

Some headphones have a button or gesture that activates the voice assistant. Currently, that still triggers Google Assistant, even if Gemini has replaced it on the smartphone. Google's Pixel Buds Pro, for example, still work with the old service, as indicated by their voice and capabilities, per the source.

For now, Google is focused on launching the Gemini app for smartphones in all regions, especially Europe. When Gemini expands to audio wearables, users can expect features such as adjustable playback speed and shorter answers.

What do you think about the Gemini phone app? Let us know in the comment box, and tell us whether you liked this post.

Featured Image from piunikaweb.com
