Prosecution Insights
Last updated: April 19, 2026
Application No. 18/995,240

HAND-EYE COLLABORATIVE AUDITORY COGNITIVE ASSESSMENT SYSTEM

Non-Final OA §102
Filed
Jan 16, 2025
Examiner
LEE, BRYAN MCALLISTER
Art Unit
3796
Tech Center
3700 — Mechanical Engineering & Manufacturing
Assignee
Guangzhou Medical University
OA Round
1 (Non-Final)
Grant Probability
93% (Favorable)
OA Rounds
1-2
To Grant
3y 0m
With Interview
99%

Examiner Intelligence

Career Allow Rate
93% (above average; 40 granted / 43 resolved; +23.0% vs TC avg)
Interview Lift
+10.7% (moderate lift among resolved cases with an interview)
Typical timeline
Avg Prosecution
3y 0m (14 currently pending)
Career history
Total Applications
57 (across all art units)

Statute-Specific Performance

§101
6.4% (-33.6% vs TC avg)
§103
31.9% (-8.1% vs TC avg)
§102
56.7% (+16.7% vs TC avg)
§112
5.1% (-34.9% vs TC avg)
Tech Center averages are estimates; based on career data from 43 resolved cases.
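The per-statute deltas are simple differences against a single Tech Center baseline. A minimal sketch of that arithmetic (the 40.0% baseline is inferred from the displayed deltas, not an independently reported figure):

```python
# Examiner's per-statute rejection rates, as displayed above.
examiner_rates = {"§101": 6.4, "§103": 31.9, "§102": 56.7, "§112": 5.1}

# Tech Center baseline inferred from the displayed deltas (each rate plus
# its delta lands on 40.0%); treat this as an estimate, not a known value.
TC_AVG = 40.0

# Delta vs. the Tech Center average, in percentage points.
deltas = {statute: round(rate - TC_AVG, 1)
          for statute, rate in examiner_rates.items()}
```

Each displayed delta then reproduces as rate minus baseline, e.g. +16.7 points for §102 and -33.6 points for §101.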

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The abstract of the disclosure is objected to because the abstract exceeds 150 words in length. The abstract should be in narrative form and generally limited to a single paragraph within the range of 50 to 150 words in length. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).

Claim Rejections - 35 U.S.C. § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1 and 5-15 are rejected under 35 U.S.C. 102(a)(1) and 35 U.S.C. 102(a)(2) as being anticipated by Li et al. (hereinafter ‘Li’, Foreign Patent CN 108962397 A).
In regards to claim 1, Li discloses a hand-eye collaborative auditory cognitive assessment system, comprising: a speech recognition device configured to provide speech information to a participant, obtain a retold statement made by the participant based on the speech information, and obtain a speech retelling result (Fig. 1, ln. 267, "voice channel characteristics mainly represent the pronunciation disorder and dysarthric disorder, wherein the sound barrier has the volume, sound hoarseness, rough sound, breath sound, tremors and the like. it can extract the common features including but not limited to Jilter, Shimmer, SB-HNR, NHR, RPDE, and so on."), a hand-eye recognition device configured to provide visual information to the participant (ln. 176, "Further, after the user obtaining the diagnosis result, in order to make the user experience more real and profound, the decision-feedback module through multi-channel distribution optimizing method, namely, microphone combined with the display screen of the method the final diagnosis result through the pen and voice channel back to the user, the feedback makes the user feel the completion condition of the task."), obtain information of a hand movement executed by the participant based on the visual information, wherein the information of the hand movement comprises a single hand movement score and a comprehensive hand movement score, the single hand movement score is obtained when the hand-eye recognition device operates alone, and the comprehensive hand movement score is obtained when the speech recognition device and the hand-eye recognition device operate together (ln. 143, "writing characteristic can better reflect the user of motor function, normally expressed as hand tremor caused by problems such as pen position and pressure movement parameter change."), and process the single hand movement score and the comprehensive hand movement score to obtain a cognitive scoring result (ln. 225, "collecting user information, past medical history, meter checking part, firstly filling the user information and past medical history. wherein the meter checking is performed under the guidance of a professional doctor, doctor through the finish degree of the user giving the corresponding score, recording the score of the user."), the hand-eye recognition device comprises an image display module configured to display a moving region that the participant requires to follow with a hand, a hand information obtaining module configured to obtain precise touch time of the moving region when the image display module is touched (ln. 100, "Further, the data collecting unit when obtaining the handwriting and the audio task, enables the user in natural state, for formal before testing the user, requiring the user to pen interaction operation under the comfortable sitting condition…"), a timekeeping module configured to record total touch time of the image display module (ln. 145, "and the drawing features more closely with cognitive function relation of the user, normally expressed as user cognitive problem caused by finish time is abnormal, the error times increase."), a calculation module configured to calculate the single hand movement score or the comprehensive hand movement score based on the precise touch time and the total touch time, and a hand-eye control processing module configured to control startup and stop of the image display module, the hand information obtaining module, and the timekeeping module, and process the single hand movement score and the comprehensive hand movement score to obtain the cognitive scoring result when the hand-eye recognition device provides the visual information to the participant separately, the hand-eye control processing module calculates the single hand movement score based on the precise touch time and the total touch time, when the hand-eye recognition device provides the visual information to the participant and the speech recognition device provides the speech information to the participant, the hand-eye control processing module calculates the comprehensive hand movement score based on the precise touch time and the total touch time, and an assessment processing device configured to perform cognitive assessment on the participant based on the speech retelling result and the cognitive scoring result (ln. 261, "The pen interacting channel is mainly expressed on the writing and drawing feature, movement feature is mainly used as basic analysis unit to sample the sequence of strokes, normal extracting position of the handwriting, pressure and angle motion parameter, and using multiple analysis method processing corresponding motion parameters obtained by feature.").

In regards to claim 5, Li discloses that the speech recognition device comprises: a word and sentence database module configured to store a speech task that the participant requires to retell (ln. 243, "test corpus is comprised of a plurality of words, the actual test when random test sentence of a sentence in the corpus."), an audio output module configured to output a speech task that is to be retold by the participant, and a speech control processing module configured to control startup and stop of the word and sentence database module and the audio output module, obtain the retold statement, and compare the retold statement with the speech task output by the audio output module to obtain the speech retelling result (Claim 6: "The system according to claim 1, wherein, the said feature extracting module extracting voice channel characteristic according to sound and dysarthric disorders, extracted from pen interaction task drawing features as pen writing characteristics and channel characteristics, and extracting the characteristic strong pen interaction information correlation channel and the voice channel.").
In regards to claim 6, Li discloses that the speech recognition device further comprises: a speech rate adjustment module configured to adjust an output speech rate of the speech task of the audio output module (ln. 271, "The duration can be calculated, stability of pronunciation, speech rate, pronunciation of sentence pattern feature.").

In regards to claim 7, Li discloses that the word and sentence database module is configured to divide the speech task into a plurality of speech task groups based on an age bracket and the audio output module is configured to obtain age information of the participant, compare the age information with a plurality of age brackets in the word and sentence database module to obtain a speech task group of an appropriate age bracket, and output the speech task based on the speech task group (ln. 96, "Further, the data collecting unit records the age of the user, gender, education degree and so on their personal characteristic information; recording the physiological index and the disease state of user history checking body; because of different gauge test deflection point are different, under the guidance of the professional doctor to finish different mental state quantity meter testing.").

In regards to claim 8, Li discloses that the word and sentence database module is configured to set a corresponding output speech rate of the speech task for each speech task group and the speech rate adjustment module is configured to obtain the age information of the participant, compare the age information with the age brackets in the word and sentence database module to obtain an output speech rate of a speech task of an appropriate age bracket, and adjust the output speech rate of the speech task of the audio output module (ln. 96, "Further, the data collecting unit records the age of the user, gender, education degree and so on their personal characteristic information; recording the physiological index and the disease state of user history checking body; because of different gauge test deflection point are different, under the guidance of the professional doctor to finish different mental state quantity meter testing.").

In regards to claim 9, Li discloses that the image display module is configured to display a moving region that moves along an elliptical path (ln. 192, "the effect of the pen channel is mainly composed of writing state (kinematic feature) and a graphics drawing (figurate feature) of the result.").

In regards to claims 11 and 12, Li discloses that the speech recognition device further comprises: a noise output module configured to output preset noise and provide a plurality of signal-to-noise ratio (SNR) environments to the participant and that when the noise output module outputs the noise, in the plurality of SNR environments, all speech tasks output by the audio output module are consistent, and all movement speeds of the moving region of the image display module are consistent (ln. 267, "voice channel characteristics mainly represent the pronunciation disorder and dysarthric disorder, wherein the sound barrier has the volume, sound hoarseness, rough sound, breath sound, tremors and the like. it can extract the common features including but not limited to Jilter, Shimmer, SB-HNR, NHR, RPDE, and so on. mainly dysarthric disorder is inaccurate pronunciation, word is not clear, sound adjusting abnormal problems.").
In regards to claim 13, Li discloses that the hand-eye control processing module is configured to: when the single hand movement score and the comprehensive hand movement score are obtained, subtract the single hand movement score from a plurality of obtained comprehensive hand movement scores separately to obtain a plurality of listening effort scores, and then obtain the cognitive scoring result through analysis based on a change between the listening effort scores (ln. 225, "collecting user information, past medical history, meter checking part, firstly filling the user information and past medical history. wherein the meter checking is performed under the guidance of a professional doctor, doctor through the finish degree of the user giving the corresponding score, recording the score of the user.").

In regards to claim 14, Li discloses that the speech task stored in the word and sentence database module comprises a plurality of types of long sentences, short sentences, and words and the speech task output by the audio output module is some long sentences, short sentences, or words obtained from the word and sentence database module (ln. 243, "test corpus is comprised of a plurality of words, the actual test when random test sentence of a sentence in the corpus.").

In regards to claim 15, Li discloses that the image display module is configured to adjust touch area of the moving region (ln. 176, "Further, after the user obtaining the diagnosis result, in order to make the user experience more real and profound, the decision-feedback module through multi-channel distribution optimizing method, namely, microphone combined with the display screen of the method the final diagnosis result through the pen and voice channel back to the user, the feedback makes the user feel the completion condition of the task.").
Allowable Subject Matter

Claim 10 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRYAN M LEE whose telephone number is (703)756-1789. The examiner can normally be reached 9:00 am - 6:00 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Carl Layno, can be reached at (571) 272-4949. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/B.M.L./Examiner, Art Unit 3796
/CARL H LAYNO/Supervisory Patent Examiner, Art Unit 3796

Prosecution Timeline

Jan 16, 2025
Application Filed
Jan 22, 2026
Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594427
Programming Closed-Loop Neural Stimulation Therapy
2y 5m to grant; granted Apr 07, 2026
Patent 12594428
Haptics-Based Recharge Alignment Feedback for Implantable Stimulator
2y 5m to grant; granted Apr 07, 2026
Patent 12588959
Surgical Instrument with Magnetic Sensing
2y 5m to grant; granted Mar 31, 2026
Patent 12576273
Treatment System Using Vagus Nerve Stimulation and Operating Method Thereof
2y 5m to grant; granted Mar 17, 2026
Patent 12576264
Neuroprosthesis Apparatus for Stimulating Leg Movement
2y 5m to grant; granted Mar 17, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds
1-2
Grant Probability
93%
With Interview (+10.7%)
99%
Median Time to Grant
3y 0m
PTA Risk
Low
Based on 43 resolved cases by this examiner. Grant probability derived from career allow rate.
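The grant probability shown above is the career allow rate computed directly from the counts reported earlier (40 granted of 43 resolved). A minimal sketch of that ratio (the function name is illustrative, not from any real API):

```python
def allow_rate_pct(granted: int, resolved: int) -> int:
    """Career allow rate as a whole-percent figure."""
    return round(100 * granted / resolved)

# 40 granted out of 43 resolved cases, as shown in the examiner stats.
grant_probability = allow_rate_pct(granted=40, resolved=43)  # 93
```

Note that the interview-adjusted 99% figure is not a simple additive bump (93 + 10.7 would exceed 100), so the dashboard presumably caps or models that adjustment separately.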
