Prosecution Insights
Last updated: April 19, 2026
Application No. 18/504,270

SIGN TO SPEECH DEVICE

Non-Final OA: §101, §103

Filed: Nov 08, 2023
Examiner: EGLOFF, PETER RICHARD
Art Unit: 3715
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Voicesign LLC
OA Round: 1 (Non-Final)

Grant Probability: 42% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 5m
Grant Probability With Interview: 75%

Examiner Intelligence

Career Allow Rate: 42% (329 granted / 775 resolved; -27.5% vs TC avg)
Interview Lift: +32.1% for resolved cases with interview
Avg Prosecution: 3y 5m typical timeline; 40 applications currently pending
Total Applications: 815 across all art units (career history)
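The headline examiner statistics above are simple ratios. A minimal sketch of how they relate, using only the figures shown on this page (the 75.0% with-interview and 42.9% without-interview rates are the page's displayed values; the function names are illustrative):

```python
# Reproduce the dashboard's headline examiner statistics from the raw
# counts shown above. 329 granted of 775 resolved yields the ~42% career
# allow rate; interview lift is the percentage-point gap between the
# with-interview and without-interview allow rates.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage, rounded to one decimal place."""
    return round(100 * granted / resolved, 1)

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Lift in percentage points from conducting an examiner interview."""
    return round(rate_with - rate_without, 1)

career = allow_rate(329, 775)        # 42.5, displayed as 42%
lift = interview_lift(75.0, 42.9)    # 32.1 percentage points
```

This also explains the header's "75% With Interview" figure: the without-interview baseline plus the +32.1-point lift.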

Statute-Specific Performance

§101: 29.1% (-10.9% vs TC avg)
§103: 38.1% (-1.9% vs TC avg)
§102: 15.9% (-24.1% vs TC avg)
§112: 14.2% (-25.8% vs TC avg)
Tech Center averages are estimates • Based on career data from 775 resolved cases
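As a quick consistency check on the chart data, the Tech Center average implied by each statute row (the examiner's rate minus the displayed delta) can be recovered directly from the figures above; every row implies the same 40.0% average:

```python
# Back out the Tech Center average from each statute's examiner allow rate
# and its displayed "vs TC avg" delta. All inputs are the figures shown
# above; a consistent chart implies the same TC average for every statute.

examiner_rate = {"101": 29.1, "103": 38.1, "102": 15.9, "112": 14.2}
delta_vs_tc = {"101": -10.9, "103": -1.9, "102": -24.1, "112": -25.8}

implied_tc_avg = {
    statute: round(examiner_rate[statute] - delta_vs_tc[statute], 1)
    for statute in examiner_rate
}
# every statute implies a 40.0% Tech Center average
```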

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections – 35 USC § 101

2. 35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Independent claim 1 recites a method comprising:
receiving input for a new physical gesture based on electrical signals;
generating a new physical gesture definition based on a received input;
generating a numeric range for each active motion detection sensor associated with a new physical gesture definition based on the received input;
comparing the numeric range of the new physical gesture definition to previously declared physical gesture definitions;
alerting a user to remove one of the new physical gesture definition or one of the previously declared physical gesture definitions if the numeric range of the new physical gesture definition crosses the numeric range of any of the previously declared physical gesture definitions within a same cluster;
providing an option for the user to discard one of the new physical gesture definition, or one of the previously declared physical gesture definitions, or keep both if the numeric range of the new physical gesture definition crosses the numeric range of any of the previously declared physical gesture definitions within a different cluster;
declaring in a preview panel for the user the new physical gesture definition as an input for mapping; and
mapping an audible sound to the new physical gesture definition.

The limitations of generating a physical gesture definition, generating a numeric range, comparing the range, alerting a user, providing an option to discard, declaring the definition as an input, and mapping a sound to the definition, as drafted, constitute a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting a "device" comprising a "controller" configured to receive the physical motion as electrical signals, nothing in the claim elements precludes the steps from practically being performed in the mind. For example, but for the "controller" language, "generating", "comparing", "alerting", "providing", "declaring", and "mapping" in the context of this claim encompass a user manually generating a definition and numeric range, comparing the range to a previous range, receiving an alert and option, viewing a new definition, and mapping a sound, for example using a pen and paper expression of the definitions and mappings, or as a series of purely mental steps. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

This judicial exception is not integrated into a practical application. In particular, the claim recites receiving input for the new physical gesture based on electrical signals from a motion detection sensor. This is directed to insignificant extra-solution activity in the form of pre-solution data gathering. See MPEP 2106.05(g). The claim further recites using a controller to perform the claimed steps. The controller in these steps is recited at a high level of generality (i.e., as a generic controller performing generic computer functions of receiving electrical signals, processing them, providing outputs, and mapping sounds to definitions) such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of receiving input based on electrical signals amounts to no more than adding insignificant extra-solution activity to the judicial exception (see MPEP 2106.05(g)), and using a processor to perform the claimed generating, comparing, alerting, providing, declaring, and mapping steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Dependent claims 2-20 recite the same abstract idea as in claim 1, and only recite additional details of the generic controller obtaining user movement data from generic movement sensors (e.g. positional coordinates of various body parts), analyzing the data, and outputting results of the analysis. Therefore, these claims do not recite additional limitations sufficient to direct the claimed invention to significantly more.

Claim Rejections - 35 USC § 103

3. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

4. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

5. Claims 1, 3-5, 12-16 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Perez (US 2010/0306716 A1) in view of Day et al. (US Patent No. 7,039,676 B1).

Regarding claim 1, Perez discloses a device (see Par's. 26-27) comprising at least one motion detection sensor (capture device 20) configured to detect physical motion of a user and generate electrical signals corresponding to the detected physical motion, a controller (computing environment 12) configured to receive the electrical signals from the at least one motion detection sensor and associate the electrical signals with a preprogrammed word, meaning, or sound, and program code stored on computer readable media and executable by a computer processor to define a new physical gesture of the user by:
receiving input for the new physical gesture based on the electrical signals (Par's. 45-46 – capture device 20 detects movements and generates gesture data for the movement);
generating a new physical gesture definition based on the received input (in the case the input is used for remapping – see e.g. Par's. 25, 29);
generating a numeric range for each active motion detection sensor associated with the new physical gesture definition based on the received input (numerical parameters such as depth, which may be detected by multiple camera sensors – Par's. 41, 47);
comparing the numeric range of the new physical gesture definition to previously declared physical gesture definitions (gesture recognition engine compares input gesture to previously declared gestures in library 190 – Par. 47);
alerting the user to remove one of the new physical gesture definition or one of the previously declared physical gesture definitions if the numeric range of the new physical gesture definition crosses the numeric range of any of the previously declared physical gesture definitions within a same cluster (system produces output corresponding to whether the input data corresponds to the gesture data of a collection of gesture filters – Par. 48);
providing an option for the user to discard one of the new physical gesture definition, or one of the previously declared physical gesture definitions, or keep both if the numeric range of the new physical gesture definition crosses the numeric range of any of the previously declared physical gesture definitions within a different cluster (supplement or overwrite – Par. 52; see also Par's. 125, 134); and
declaring in a preview panel for the user the new physical gesture definition as an input for mapping (Par. 52 – new gesture displayed in the form of an avatar).

Perez does not appear to disclose the device is a sign to speech device, including an audio output device configured to generate audible sound for the preprogrammed word, meaning, or sound based on the electrical signals corresponding to the detected physical motion, and the mapping comprises mapping an audible sound to the new physical gesture definition. However, Day discloses a similar system for gesture recognition and allowing a user to define and map a new gesture (column 8, lines 33-52), wherein the gesture is mapped to an audible sound (column 6, lines 49-65). Accordingly, it would have been obvious to one skilled in the art before the effective filing date of the invention to modify Perez by utilizing the gesture definition method to map gestures to audible sounds, as taught by Day, to obtain predictable results of helping the user to use the gestures to communicate orally with other users.

Regarding claims 3-5, 12-16 and 18-20, Perez in view of Day further discloses:
the new physical gesture includes at least one handshape (Perez - Par. 35) (as per claim 3);
the new physical gesture includes positional coordinates for at least one handshape relative to other position relational input from the user (Perez - Par. 35) (as per claim 4);
the other position relational input from the user includes at least one of a position of the user's elbow (Perez - Par. 50) (as per claim 5);
a no input algorithm executable by the computer processor to reduce or eliminate audio output when input from the at least one motion sensor does not correspond to any of the previously declared physical gesture definitions (Day - column 6, lines 49-65) (as per claim 12);
an input sequencing algorithm executable by the computer processor to sequence input from the at least one motion detection sensor before matching the new physical gesture definition with audio output (Perez - collect sequence of movement data from multiple cameras - Par. 49) (as per claim 13);
a feedback algorithm executable by the computer processor to generate a graphical representation corresponding to at least one position of the at least one motion detection sensor (Perez - Par. 40) (as per claim 14);
a feedback device including a display for outputting the graphical representation of the feedback for the user (Perez - display user feedback on a television screen – Par. 52) (as per claim 15);
recognizing a user-defined neutral physical gesture by: comparing received input from the at least one motion detection sensor with a plurality of the previously declared physical gestures; and registering the received input as the user-defined neutral physical gesture when there is no match between the received input from the at least one motion detection sensor and the plurality of previously defined physical gestures (Perez - generating new set of gesture filter parameters – Par. 125) (as per claim 16);
a new location definition algorithm that defines orientation ranges for the new location and adds the new location as a declared location for future mapping (Perez - Par. 125) (as per claim 18);
sequence a plurality of the previously declared physical gestures based on user input identifying a direction toward a next sequence, and then a next set of input parameters (Perez - Par. 125) (as per claim 19); and
add another sequence by: determining if the sequence follows "and then" logic or "or" logic, the "and then" logic sequences the mapping after the previous sequence; and mapping multiple outputs into one audible sound (Day - column 8, lines 33-52; column 6, lines 49-65) (as per claim 20).

6. Claims 2, 6, 8-11 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Perez (US 2010/0306716 A1) in view of Day et al. (US Patent No. 7,039,676 B1), and further in view of Bress (US 2012/0319940 A1).

Regarding claims 2, 6, 8-11 and 17, the combination of Perez and Day does not explicitly disclose, but Bress does disclose in a similar system for inputting gestures and mapping gestures to audible sounds (Par. 103):
the audible sound mapped to the new physical gesture is written to a wearable device (Par. 71, Fig. 1) (as per claim 2);
at least one customizable device component selected from at least one finger band, at least one wristband, at least one elbow band, and at least one shoulder band, wherein the at least one customizable device component is custom-fit to the user, and wherein the at least one customizable device component houses the at least one motion detection sensor (Par's. 71, 155) (as per claim 6);
wrist rotation coordinates correspond to roll of the user's wrist and define angular rotations of the user's wrist as inputs in mapping the new physical gesture of the user (Par. 80) (as per claim 8);
palm tilt coordinates correspond to flexion of the user's palm as inputs in the new physical gesture of the user (Par's. 87, 94) (as per claim 9);
at least one wristband housing at least a first motion detection sensor (Par. 71), and at least one shoulder band housing at least one motion detection sensor (Par. 155), wherein relative positions of the at least one motion detector are calibrated against a position of the at least one wristband and a position of the at least one shoulder band in order to detect relative motion of at least one other motion detection sensor so that the user can move about their environment to different locations and orientations without having to reset to a default position (e.g. motion of wrist above shoulder – Par. 81) (as per claim 10);
at least one elbow band housing at least a position detection sensor, the at least one elbow band providing depth positions to the controller indicating how close the user's hand is to the user's body (Par's. 38, 151) (as per claim 11); and
a location setup algorithm that tracks an orientation of the user's wrist and defines regions of space to be used as location inputs within cluster mappings (Par. 80) (as per claim 17).

It would have been obvious to one skilled in the art before the effective filing date of the invention to modify the combination of Perez and Day by incorporating these sensors and measurements taught by Bress, to provide predictable results of obtaining more accurate user movement measurements and providing the user a greater variety of potential gesture inputs.

Claims Allowable over the Prior Art

7. Claim 7 distinguishes patentably from the prior art of record. Bress, as discussed above, discloses shoulder and wristband sensors to detect user movements and associate them with gestures; Lacey (US 2021/0263593 A1) discloses detecting user gestures using wrist and shoulder measurements, including yaw measurements (Par. 164). However, these references, alone or in combination, do not disclose the limitations of claim 7, including continuously updating shoulder or wristband yaw measurements so that all locations remain relative to the user's torso regardless of user movement, in the manner claimed.

Conclusion

8. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Browy et al. (US 2018/0075659 A1) discloses sensory eyewear.

9. Any inquiry concerning this communication or earlier communications from the examiner should be directed to PETER EGLOFF, whose telephone number is (571) 270-3548. The examiner can normally be reached Monday - Friday, 9:00 am - 5:00 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Xuan Thai, can be reached at (571) 272-7147. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Peter R Egloff/
Primary Examiner, Art Unit 3715
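The range-comparison logic recited in claim 1, as characterized in the rejection, can be illustrated with a short sketch: a new gesture definition carries a numeric range per motion sensor, and overlap with a previously declared definition triggers either a removal alert (same cluster) or a keep/discard option (different cluster). All names, types, and the interval-overlap test below are illustrative assumptions, not code from the application:

```python
# Hypothetical illustration of claim 1's comparing/alerting/providing steps.
# The dataclass layout and function names are assumptions for clarity only.
from dataclasses import dataclass

@dataclass
class GestureDefinition:
    name: str
    cluster: str
    ranges: dict[str, tuple[float, float]]  # sensor id -> (low, high)

def ranges_cross(a: GestureDefinition, b: GestureDefinition) -> bool:
    """True if the numeric ranges of any shared sensor overlap."""
    for sensor, (lo, hi) in a.ranges.items():
        if sensor in b.ranges:
            b_lo, b_hi = b.ranges[sensor]
            if lo <= b_hi and b_lo <= hi:  # standard interval-overlap test
                return True
    return False

def check_new_definition(new: GestureDefinition,
                         declared: list[GestureDefinition]) -> str:
    """Dispatch per claim 1: alert on same-cluster overlap, offer a choice
    on different-cluster overlap, otherwise declare the gesture for mapping."""
    for prior in declared:
        if ranges_cross(new, prior):
            if prior.cluster == new.cluster:
                return f"alert: remove '{new.name}' or '{prior.name}'"
            return (f"option: discard '{new.name}', discard "
                    f"'{prior.name}', or keep both")
    return f"declare '{new.name}' for mapping"
```

For example, with a declared "wave" gesture spanning wrist_pitch 10-40, a new same-cluster gesture spanning 35-60 would trigger the removal alert, while the same overlap in a different cluster would produce the keep/discard option.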

Prosecution Timeline

Nov 08, 2023: Application Filed
Dec 13, 2025: Non-Final Rejection (§101, §103)
Mar 04, 2026: Interview Requested
Mar 24, 2026: Examiner Interview Summary
Mar 24, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573311: SMART E-LEARNING SYSTEM USING ADAPTIVE VIDEO LECTURE DELIVERY BASED ON ATTENTIVENESS OF THE VIEWER
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12555487: SYSTEMS AND METHODS FOR DYNAMIC MONITORING OF TEST TAKING
Granted Feb 17, 2026 (2y 5m to grant)

Patent 12548469: METHODS AND SYSTEMS TO QUANTIFY CLINICAL CANNULATION SKILL
Granted Feb 10, 2026 (2y 5m to grant)

Patent 12548466: ACCESSIBILITY-ENABLED APPLICATION SWITCHING
Granted Feb 10, 2026 (2y 5m to grant)

Patent 12530987: METHOD FOR DETERMINING ASSEMBLY SEQUENCE AND GENERATING INSTRUCTION OF ASSEMBLING TOY
Granted Jan 20, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 42% (75% with interview, a +32.1% lift)
Median Time to Grant: 3y 5m
PTA Risk: Low
Based on 775 resolved cases by this examiner. Grant probability derived from career allow rate.
