Prosecution Insights
Last updated: April 19, 2026
Application No. 18/713,784

SPOKEN LANGUAGE UNDERSTANDING BY MEANS OF REPRESENTATIONS LEARNED UNSUPERVISED

Status: Non-Final OA (§103)
Filed: May 28, 2024
Examiner: MONIKANG, GEORGE C
Art Unit: 2692
Tech Center: 2600 — Communications
Assignee: Corti Aps
OA Round: 1 (Non-Final)
Grant Probability: 74% (Favorable)
OA Rounds: 1-2
To Grant: 3y 0m
With Interview: 82%

Examiner Intelligence

Career Allow Rate: 74%, above average (701 granted / 941 resolved; +12.5% vs TC avg)
Interview Lift: +7.2%, a moderate lift among resolved cases with interview
Typical Timeline: 3y 0m average prosecution; 48 applications currently pending
Career History: 989 total applications across all art units
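The headline figures above are simple ratios over the examiner's career counts. A minimal sketch, assuming the 82% "with interview" figure is just the career allow rate plus the additive +7.2% lift (the page does not state the exact formula):

```python
# Sketch of how the headline metrics can be derived from the examiner's
# career counts. The counts (701 granted of 941 resolved) come from the
# page; treating the interview lift as a simple additive adjustment is
# an assumption.
granted, resolved = 701, 941

career_allow_rate = granted / resolved * 100   # ~74.5%, displayed as 74%
interview_lift = 7.2                           # percentage points, per the page
with_interview = career_allow_rate + interview_lift  # ~81.7%, displayed as 82%

print(f"Career allow rate: {career_allow_rate:.1f}%")
print(f"With interview:    {with_interview:.1f}%")
```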

Statute-Specific Performance

§101: 3.9% (-36.1% vs TC avg)
§103: 58.6% (+18.6% vs TC avg)
§102: 22.5% (-17.5% vs TC avg)
§112: 4.0% (-36.0% vs TC avg)
Deltas are measured against a Tech Center average estimate; based on career data from 941 resolved cases.
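Each per-statute delta is the examiner's figure minus a Tech Center average estimate, so the baseline can be backed out. A quick check (the arithmetic uses the page's numbers; backing out the baseline is an inference, not something the page states):

```python
# Per-statute figures and their deltas vs the Tech Center average
# estimate, as listed above. Subtracting the delta from each figure
# recovers the implied baseline.
examiner_share = {"101": 3.9, "103": 58.6, "102": 22.5, "112": 4.0}
vs_tc_avg      = {"101": -36.1, "103": 18.6, "102": -17.5, "112": -36.0}

implied_tc_avg = {s: round(examiner_share[s] - vs_tc_avg[s], 1)
                  for s in examiner_share}
print(implied_tc_avg)  # every statute backs out the same 40.0 baseline
```

That all four statutes imply the same 40.0% baseline suggests a single flat estimate was used rather than per-statute Tech Center averages.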

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3-15 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al., US Patent Pub. 20180322961 A1, in view of Kannan et al., US Patent Pub. 20190311814 A1, and further in view of Banerjee et al., WO 2023059746 A1. (The Kim et al. reference is cited in the IDS filed 02/28/2025.)

Re Claim 1, Kim et al. discloses a method for determining and presenting a recommendation for assisting an interviewee party in a state in need of help solving a problem, such as experiencing a cardiac arrest or an acute injury or disease such as meningitis, during an interview between an interviewing party and said interviewee party (abstract; claim 1: dialogue between a user and another to carry out medical assessment), said method comprising: providing a sound recorder for capturing the sound of said interviewee party during said interview (fig. 1A: voice module 104a; para 0042: voice modules configured to receive/record audio data from a user), providing a processing unit, and a memory including a database having a feature model comprising a first statistically learned model, said feature model having a first input comprising a number of samples of the sound of said interviewee party (fig. 1B: "Medical condition diagnosis service" 140; fig. 2; para 0065: acoustic feature), and comprising a first output being a vector or array (paras 0068, 0078, 0151, 0180: "support vector machine").

Kim et al. fails to disclose said feature model being learned unsupervised, said database having a recommendation model comprising a second statistically learned model, said recommendation model having a second input comprising said first output, and a second output comprising said recommendation; a) inputting a number of samples of the sound of said interviewee party into said feature model and outputting said first output; b) inputting said first output into said recommendation model and determining said recommendation by means of said recommendation model; c) determining the confidence level of the output of said recommendation model, and providing a confidence level threshold; d) when the confidence level is greater than said confidence level threshold: presenting said recommendation to said interviewing party or interviewee party by means of an output device such as a display or a loudspeaker; or h) when the confidence level is smaller than said confidence level threshold: returning to step a) for inputting a subsequent number of samples of the sound of said interviewee party into said feature model.

However, Kannan et al. discloses a system for responding to healthcare inquiries, where treatments are recommended to a patient/user when the confidence score is above a predetermined threshold (Kannan et al., paras 0132, 0134: recommendation is made via machine learned model) and more questions are generated once the confidence score is below the predetermined threshold (Kannan et al., paras 0132, 0134: implying there is no recommendation made to a patient/user). It would have been obvious to modify Kim et al. to include a recommendation module that recommends treatments when the confidence score is above a predetermined threshold and makes no recommendation when the confidence score is below the predetermined threshold, as taught in Kannan et al., for the purpose of optimizing recommended treatments.

Furthermore, Banerjee et al. discloses a system for generating clinical recommendations for medical treatment (Banerjee et al., para 0129) that utilizes neural-network machine-learning models (Banerjee et al., para 0128), where the machine learning models can be trained in an unsupervised fashion (Banerjee et al., para 0195). It would have been obvious to modify the machine learning model of Kannan et al., as used to modify Kim et al., so that it trains in an unsupervised fashion as taught in Banerjee et al., for the purpose of capturing hidden patterns and structures in unlabeled data without prior human guidance.

Re Claim 3, the combined teachings of Kim et al., Kannan et al., and Banerjee et al. disclose the method according to claim 1, said set of recommendations comprising more than one recommendation (Kannan et al., para 0146: recommendation module may provide a ranked list of potential treatments, implying more than one recommended treatment).

Re Claim 4, the combined teachings of Kim et al., Kannan et al., and Banerjee et al. disclose the method according to claim 1, said recommendation model being learned supervised (Kannan et al., para 0139: supervised machine learning algorithm).

Re Claim 5, the combined teachings of Kim et al., Kannan et al., and Banerjee et al. disclose the method according to claim 1, but fail to explicitly disclose said first input comprising a first vector or first array, and said first output comprising a second vector or second array, said second vector or second array smaller than said first vector or first array. Since there is no clear advantage to one vector's size being bigger than another's, it would have been obvious to make one vector bigger than another for the purpose of obtaining ideal data scaling.

Claims 6-7 have been analyzed and rejected according to claim 5.

Re Claim 8, the combined teachings of Kim et al., Kannan et al., and Banerjee et al. disclose the method according to claim 1, said feature model trained to determine the most coherent signals in the sound of said interviewee party (Kim et al., paras 0063-0064, 0077: speech recognition for determining coherent voice signals that minimize jitters, etc.).

Re Claim 9, the combined teachings of Kim et al., Kannan et al., and Banerjee et al. disclose the method according to claim 1, but fail to explicitly disclose said vector or array comprising digits not constituting text nor audio. However, Official Notice is taken that both the concept and advantages of using digits for support vectors in machine learning are well known. It would have been obvious to modify the support vector of Kim et al. to utilize digits for the purpose of being able to handle high-dimensional data.

Re Claim 10, the combined teachings of Kim et al., Kannan et al., and Banerjee et al. disclose the method according to claim 1, said feature model comprising an encoder model (Banerjee, para 0019: encoder and decoder neural networks).

Re Claim 11, the combined teachings of Kim et al., Kannan et al., and Banerjee et al. disclose the method according to claim 1, said feature model comprising a decoder model (Banerjee, para 0019: encoder and decoder neural networks).

Re Claim 12, the combined teachings of Kim et al., Kannan et al., and Banerjee et al. disclose the method according to claim 1, said feature model learned without ground truth supervision (Kannan et al., para 0139: supervised machine learning algorithm implies the use of ground truth labels, since they are essential for supervised machine learning algorithms).

Re Claim 13, the combined teachings of Kim et al., Kannan et al., and Banerjee et al. disclose the method according to claim 1, said subsequent number of samples being subsequent in time to said number of samples (Kim et al., fig. 1B: "Medical condition diagnosis service" 140; fig. 2; para 0065: acoustic feature; para 0141: voice samples are naturally going to be received in sequential/subsequent order as they were uttered).

Re Claim 14, the combined teachings of Kim et al., Kannan et al., and Banerjee et al. disclose the method according to claim 1, inputting the sound of said interviewee party into said processing unit as an electronic signal (Kim et al., fig. 1B: "Medical condition diagnosis service" 140; fig. 2; para 0065: acoustic feature; para 0141: voice samples are naturally going to be received in sequential/subsequent order as they were uttered).

Re Claim 15, the combined teachings of Kim et al., Kannan et al., and Banerjee et al. disclose the method according to claim 1, separating said electronic signal into a number of samples in the time domain or a domain representative of the frequency contents of said electronic signal (Kim et al., paras 0076-0078: signals broken down into time segments; the time domain is selected from the Markush claim language).

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Kim et al., US Patent Pub. 20180322961 A1, in view of Kannan et al., US Patent Pub. 20190311814 A1, in view of Banerjee et al., WO 2023059746 A1, and further in view of Vallines et al., US Patent Pub. 20220101959 A1.

Claim 2 has been analyzed and rejected according to claim 1; the combination fails to disclose each recommendation in said set of recommendations having a probability of being the true recommendation, each recommendation in said set of recommendations being a function of said state for solving said problem when presenting a recommendation from said set of recommendations and determining the respective recommendation in said set of recommendations having the highest, or second highest, or third highest probability, and d) presenting said respective recommendation to said interviewing party or interviewee party by means of an output device such as a display or a loudspeaker. However, Vallines et al. discloses a medical system that is able to analyze medical conditions and subsequently recommend treatments, whereby the recommended treatments are presented with associated probabilities indicating the probability with which a recovery is to be expected (Vallines et al., para 0109: since all probabilities are indicated, the results will naturally also rank from the highest probability to the lowest). Since Kannan et al. teaches that its recommendation module presents a ranked list of potential treatments (Kannan et al., para 0146), it would have been obvious to modify the recommendation module of Kannan et al., as used to modify Kim et al., so that it also presents the probability of each recommended treatment as taught in Vallines et al., for the purpose of providing the user more detailed information about the recommended treatments, allowing the user to make an optimal choice.

Contact

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GEORGE C MONIKANG, whose telephone number is (571) 270-1190. The examiner can normally be reached Mon.-Fri., 9 AM-5 PM, alternate Fridays off.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Carolyn R Edwards, can be reached at 571-270-7136. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/GEORGE C MONIKANG/
Primary Examiner, Art Unit 2692
12/14/2025

Prosecution Timeline

May 28, 2024
Application Filed
Dec 14, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604126
VEHICULAR MICROPHONE AND VEHICLE
2y 5m to grant; granted Apr 14, 2026
Patent 12596518
MICROPHONE INTERFACE, VEHICLE, CONNECTION METHOD, AND PRODUCTION METHOD
2y 5m to grant; granted Apr 07, 2026
Patent 12596888
CONTEXTUALIZATION OF GENERATIVE LANGUAGE MODELS BASED ON ENTITY RESOURCE IDENTIFIERS
2y 5m to grant; granted Apr 07, 2026
Patent 12598428
TRANSDUCER AND ELECTRONIC DEVICE
2y 5m to grant; granted Apr 07, 2026
Patent 12591749
MACHINE LEARNING SYSTEM FOR MULTI-DOMAIN LONG DOCUMENT CLUSTERING
2y 5m to grant; granted Mar 31, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 74%
With Interview: 82% (+7.2%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 941 resolved cases by this examiner. Grant probability derived from career allow rate.
