
Apple Previews New Accessibility Software Features


According to Apple Newsroom, Apple previewed new software features for Live Speech, Personal Voice, Point and Speak in Magnifier, and cognitive accessibility on 15 May 2023.

The updates draw on on-device machine learning to protect user privacy. Apple collaborated with community groups representing a broad spectrum of users. Tim Cook, CEO of Apple, said that with these features everyone can create, communicate, and do what they love.

“At Apple, we’ve always believed that the best technology is technology built for everyone.” – Tim Cook, CEO of Apple

“Accessibility is part of everything we do at Apple” – Sarah Herrlinger, Apple’s senior director of Global Accessibility Policy and Initiatives

Live Speech and Personal Voice Advance Speech Accessibility

Live Speech is designed for users who are unable to speak, who have lost their speech over time, or who are at risk of losing the ability to speak. With this feature, users can type what they want to say and have it spoken aloud during FaceTime and regular phone calls on their iPhones, iPads, and Macs. Users can also save commonly used phrases so they can be inserted quickly during live conversations with family and friends.

Personal Voice is a secure and simple way for users to create a voice that sounds like their own. Users create a Personal Voice by reading a set of randomized text prompts, recording about 15 minutes of audio on their iPhones or iPads.


Point and Speak for blind and low-vision users

A new Detection Mode has been added to Magnifier for blind and low-vision users. With this feature, blind and low-vision users can interact with physical objects that have several text labels.

How does it work?

Point and Speak combines input from the camera and the LiDAR Scanner with on-device machine learning, which announces the text on each button as the user moves a finger across a keypad. Point and Speak is built into the Magnifier app on iPhones and iPads, works great with VoiceOver, and can be used with other Magnifier features such as Door Detection, People Detection, and Image Descriptions that help users navigate their physical environment.
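The core idea behind Point and Speak can be sketched in a few lines: given text labels detected in the camera frame (each with a bounding box) and the fingertip position estimated from depth data, announce the label the finger is pointing at. The sketch below is purely illustrative and not Apple's implementation; all names and coordinates are hypothetical.

```python
# Illustrative sketch of the Point and Speak idea: detected text labels
# carry bounding boxes in normalized image coordinates, and the label
# under the estimated fingertip position is the one to announce.

from dataclasses import dataclass

@dataclass
class TextLabel:
    text: str
    x: float       # bounding-box origin, normalized [0, 1]
    y: float
    width: float
    height: float

    def contains(self, px: float, py: float) -> bool:
        # True if the point (px, py) falls inside this label's box.
        return (self.x <= px <= self.x + self.width
                and self.y <= py <= self.y + self.height)

def label_under_finger(labels, finger_x, finger_y):
    """Return the text to announce, or None if the finger is not on a label."""
    for label in labels:
        if label.contains(finger_x, finger_y):
            return label.text
    return None

# Example: two detected buttons on an appliance keypad.
keypad = [
    TextLabel("Start", 0.10, 0.80, 0.15, 0.08),
    TextLabel("Stop",  0.30, 0.80, 0.15, 0.08),
]
print(label_under_finger(keypad, 0.12, 0.83))  # finger over "Start"
```

In the real feature, the labels would come from on-device text recognition and the fingertip position from the camera and LiDAR; the lookup step itself stays this simple.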


Assistive Access for Users with Cognitive Disabilities


Assistive Access offers a customized experience for phone and FaceTime calls, Camera, Music, and Messages. The feature provides a well-defined interface with high-contrast buttons and large text labels, along with tools that let trusted supporters tailor the experience for the individual they support. For example, for users who communicate visually, Messages includes an emoji-only keyboard and the option to record a video message to share with loved ones. Users can also choose between a more visual, grid-based layout for their Home Screen and apps, and a row-based layout for those who prefer text.

Some Additional Features
  • Users who are deaf or hard of hearing can pair Made for iPhone hearing devices directly with Mac and customize them for their hearing comfort.
  • Users with physical and motor disabilities who use Switch Control can turn any switch into a virtual game controller to play games on their iPhones and iPads.
  • Users who are sensitive to rapid animations can automatically pause images with moving elements, such as GIFs, in Messages and Safari.
  • For users who use VoiceOver, Siri voices sound natural and expressive even at high rates of speech feedback. Users can also customize the rate at which Siri speaks to them, with options ranging from 0.8x to 2x.
  • For users with low vision, Text Size is now easier to adjust across Mac apps such as Finder, Messages, Mail, Calendar, and Notes.
  • Voice Control adds phonetic suggestions for text editing so users who type with their voice can choose the right word out of several that might sound alike, like “do,” “due,” and “dew.” Additionally, with Voice Control Guide, users can learn tips and tricks about using voice commands as an alternative to touch and typing across iPhones, iPads, and Macs.
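The phonetic-suggestions idea from the last bullet can be illustrated with a minimal sketch: when dictation settles on one of several sound-alike words, the editor offers the other homophones so the user can pick the intended one. This is not Apple's implementation; the homophone groups below are a tiny hand-made sample standing in for a real pronunciation model.

```python
# Illustrative sketch of phonetic text-editing suggestions: given a
# dictated word, return its sound-alike alternatives so the user can
# choose the right one. The groups here are a small hypothetical sample.

HOMOPHONE_GROUPS = [
    {"do", "due", "dew"},
    {"their", "there", "they're"},
    {"to", "too", "two"},
]

def phonetic_suggestions(word):
    """Return the sound-alike alternatives for a dictated word."""
    w = word.lower()
    for group in HOMOPHONE_GROUPS:
        if w in group:
            return sorted(group - {w})
    return []  # no known homophones

print(phonetic_suggestions("do"))  # ['dew', 'due']
```

A production system would derive these groups from pronunciation data rather than a fixed table, but the user-facing behavior — pick the right word out of several that sound alike — is the same.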


Ashok Mor (also known as TechiBee) owns a YouTube channel named TechiBee, where he shares tips, tricks, and the latest tech videos from the world of smartphones.


Gemini App on macOS: 5 Powerful Features That Make It a Game-Changer


Gemini app on macOS is finally here

Gemini app on macOS has officially arrived, marking a significant step by Google in bringing its AI ecosystem closer to desktop users. With this launch, Mac users can now access Gemini without relying on browsers, making interactions faster and more seamless.

The introduction of a dedicated app signals Google’s intent to compete strongly in the growing desktop AI space. As artificial intelligence becomes an essential part of productivity workflows, having a native application simplifies how users interact with AI tools on a daily basis.

A more seamless and integrated AI experience

One of the biggest advantages of the Gemini app on macOS is accessibility. Users can launch the app instantly using keyboard shortcuts, eliminating the need to switch between tabs or open multiple windows. This improves workflow efficiency, especially for professionals who rely heavily on multitasking.

The app is designed to handle a wide range of tasks, including writing, brainstorming ideas, summarizing documents, and analyzing files. It also integrates smoothly with existing tools and services, allowing users to work across applications without interruption.

This level of integration transforms the Mac into a more capable productivity machine, where AI becomes a constant assistant rather than a separate tool.

What this means for the future of desktop AI

The launch of Gemini on macOS highlights a broader trend in the tech industry. Companies are moving beyond web-based AI solutions and focusing on dedicated desktop experiences. This shift is aimed at improving speed, usability, and deeper system integration.

For Google, this move places Gemini in direct competition with other AI platforms that already offer desktop-level access. It also reinforces the company’s commitment to making AI more accessible across devices and operating systems.

As AI continues to evolve, native applications like Gemini are expected to play a crucial role in shaping how users interact with technology. The macOS launch is likely just the beginning, with further enhancements and features expected in future updates.

Conclusion

Gemini app on macOS is more than just a new release. It represents a shift toward a more integrated and efficient AI experience for desktop users. By offering quick access, powerful features, and seamless workflows, Google is positioning Gemini as a key player in the next phase of personal computing.



iOS 26.5 Beta 1: Apple Begins Testing New Features Ahead of Next iPhone Update


iOS 26.5 Beta 1 has officially been released by Apple Inc. for developers, just days after the public rollout of iOS 26.4. The latest beta update signals Apple’s continued focus on refining its ecosystem, with early groundwork for several features expected to evolve in upcoming releases.

What’s New in iOS 26.5 Beta 1

One of the key highlights in iOS 26.5 Beta 1 is the introduction of enhancements related to Rich Communication Services (RCS). Apple appears to be working on improving encryption standards for RCS messaging, which could significantly boost privacy and security for cross-platform conversations in the future.

Another notable development is the backend support for advertisements within Apple Maps. While not yet visible to users, this feature suggests that Apple may be preparing to introduce promoted listings or search ads within its navigation app, potentially opening a new revenue stream.

The update also includes improvements aimed at better compatibility with third-party wearable devices, particularly in regions like the European Union. This move aligns with regulatory requirements and indicates Apple’s gradual shift toward a more open ecosystem.

No Major Siri Upgrade Yet

Despite growing expectations around artificial intelligence advancements, iOS 26.5 Beta 1 does not introduce any major updates to Siri. Users anticipating significant AI-driven changes may need to wait for future updates or announcements later this year.

Early Impressions

As expected from an initial beta release, iOS 26.5 Beta 1 is primarily focused on under-the-hood improvements rather than user-facing features. Developers testing the update may uncover additional changes over time, but for now, the update appears to be a stepping stone for more substantial enhancements in upcoming beta versions.

For general users, it may be advisable to wait for a stable release, as early beta builds can include bugs and performance issues.

Source: MacRumors



iOS 27 Siri AI Upgrade: Apple to Add Third-Party Chatbot Support


iOS 27 Siri AI upgrade is shaping up to be one of the most significant changes in Apple’s voice assistant strategy. According to recent reports, Apple is planning to introduce support for third-party AI chatbots within Siri, allowing users to choose how they interact with artificial intelligence on their iPhones.

A Major Shift for Siri

For years, Siri has been limited to Apple’s own ecosystem and capabilities. However, with iOS 27, Apple is expected to introduce a new “extensions” system that will enable integration with external AI tools such as ChatGPT and potentially other advanced conversational platforms.

This means users may no longer be restricted to a single AI assistant. Instead, they could select different AI services based on their needs, whether it is productivity, creativity, or general queries.

What This Means for Users

The upcoming update could dramatically improve how people use their iPhones daily. By allowing third-party AI integration, Siri could become more flexible, accurate, and context-aware. Users might be able to assign different AI models for specific tasks, such as writing emails, summarizing content, or answering complex questions.

This move also aligns with the growing competition in the AI space, where companies are racing to deliver smarter and more adaptable assistants.

Apple’s AI Strategy Is Evolving

Apple has traditionally taken a controlled approach to software and services. Opening Siri to third-party chatbots signals a shift toward a more open AI ecosystem. It also suggests that Apple is prioritizing user choice and advanced capabilities over maintaining a closed system.

If implemented well, this feature could position Siri as a central hub for AI interactions rather than just a standalone assistant.

Final Thoughts

The iOS 27 update could redefine the role of Siri on iPhones. By embracing third-party AI tools, Apple is taking a step toward making its devices more intelligent and user-centric. While official confirmation is still awaited, this development highlights Apple’s increasing focus on artificial intelligence and its future potential.

Source: Bloomberg
