The video below explains the integration of speech recognition and automation that becomes available once you download and install the Custom Dictation Commands set from this website. It’s like “magic wrapped in bacon.”
(⬆ see above) A video describing the design behind the partnership between Speech Recognition and Automation in macOS.
So how do custom Dictation Commands work?
The “magic” of User Commands is made possible by macOS automation, and more specifically, by AppleScript and JavaScript script libraries.
A script library is a script bundle file that can contain localizable resources, enabling it to work with multiple languages. Typically, a script library contains numerous scripting routines, or handlers, each designed to perform a specific task.
And because these libraries are written in AppleScript or JavaScript, they have access to the important frameworks of macOS, as well as the ability to use Apple Events to query and control applications, and the ability to execute commands with the UNIX command line.
When it comes to flexible command and control, script libraries deliver an impressive set of abilities throughout the OS and its applications, and they are a perfect partner for Speech Recognition.
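As an illustrative sketch of these abilities (the library file name and handler below are made up for this example, and are not part of the downloadable command set), a handler in an AppleScript script library might use both Apple Events and the UNIX command line like this:

```applescript
-- Saved as "MyLib.scpt" in ~/Library/Script Libraries/
-- Uses Apple Events to query an application, then runs a shell command.
on frontmostAppAndDate()
	tell application "System Events"
		set appName to name of first process whose frontmost is true
	end tell
	set dateString to do shell script "date '+%Y-%m-%d'"
	return appName & " on " & dateString
end frontmostAppAndDate
```

Any script on the Mac can then call this handler by referring to the library by name, without duplicating its code.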
So here’s how Speech Recognition and Automation work together:
You speak a command
that is recognized by the speech recognition framework
that runs a script
that executes a function from a script library
that directs a framework or application to perform a task.
For example:
When the command “make a new presentation” is spoken (1), it is recognized (2) and its indicated application context is confirmed (3). The speech system then executes the script assigned to that command (4).
The executing script (4) contains a single line of JavaScript or AppleScript code (5) that loads and runs a handler from a script library (6). The handler’s code (7) directs Keynote to make a new document (8).
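Here is a sketch of what that might look like in code. The library name “KeynoteLib” and the handler name are assumptions chosen for illustration, not the actual names used by the installed command set. First, the single-line script assigned to the command, in either language:

```applescript
-- AppleScript version of the one-line command script:
tell script "KeynoteLib" to makeNewPresentation()
-- JXA (JavaScript) version would be: Library('KeynoteLib').makeNewPresentation()
```

And the handler inside the hypothetical library file, which directs Keynote to make a new document:

```applescript
-- Inside "KeynoteLib.scpt", stored in ~/Library/Script Libraries/
on makeNewPresentation()
	tell application "Keynote"
		activate
		make new document
	end tell
end makeNewPresentation
```

Keeping the per-command script to a single line, and the real logic in a shared library, means the library can be updated once and every command that uses it benefits.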
The flexibility and power of this mechanism are what enable a single spoken command to accomplish what used to take many steps and procedures.
NOTE: All of the scripts and command assignments are created for you automatically when you run the installer script for building the Speech Recognition preference file. You don’t have to do this by hand!
COMING SOON: How to make your own custom command sets
Detailed information and videos about creating and using AppleScript script libraries are available here.
JavaScript (JXA)
JavaScript for Automation (JXA) is an extension of JavaScriptCore that enables JavaScript scripts to send Apple Events and to access the Cocoa frameworks of macOS.
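As a brief sketch of both of those abilities (run on macOS in Script Editor, or via `osascript -l JavaScript`; the specific calls are illustrative):

```javascript
// Send an Apple Event to query an application:
var finder = Application('Finder');
var appName = finder.name(); // asks the Finder for its name

// Access a Cocoa framework through JXA's Objective-C bridge:
ObjC.import('Foundation');
var hostName = $.NSHost.currentHost.localizedName.js; // this Mac's name, as a JS string
```

The `.js` suffix in the last line converts the returned NSString into a native JavaScript string, a common pattern when crossing the bridge.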
Here is a short video overview of JavaScript for Automation (JXA), and an archive containing the examples shown in the video.
DISCLAIMER
THIS WEBSITE IS NOT HOSTED BY APPLE INC.
Mention of third-party websites and products is for informational purposes only and constitutes neither an endorsement nor a recommendation. DICTATIONCOMMANDS.COM assumes no responsibility with regard to the selection, performance or use of information or products found at third-party websites. DICTATIONCOMMANDS.COM provides this only as a convenience to our users. DICTATIONCOMMANDS.COM has not tested the information found on these sites and makes no representations regarding its accuracy or reliability. There are risks inherent in the use of any information or products found on the Internet, and DICTATIONCOMMANDS.COM assumes no responsibility in this regard. Please understand that a third-party site is independent from DICTATIONCOMMANDS.COM and that DICTATIONCOMMANDS.COM has no control over the content on that website. Please contact the vendor for additional information.