Prosecution Insights
Last updated: April 17, 2026
Application No. 18/418,286

COMPUTER-IMPLEMENTED METHODS AND SYSTEMS FOR DISPLAYING TEXTUAL INFORMATION IN A NATIVE MOBILE APPLICATION

Non-Final OA: §101, §102, §103
Filed: Jan 21, 2024
Examiner: WOZNIAK, JAMES S
Art Unit: 2655
Tech Center: 2600 — Communications
Assignee: unknown
OA Round: 1 (Non-Final)
Grant Probability: 59% (Moderate)
Expected OA Rounds: 1-2
To Grant: 3y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 59% (227 granted / 385 resolved; -3.0% vs TC avg)
Interview Lift: +40.1% (strong), based on resolved cases with an interview
Avg Prosecution: 3y 7m (typical timeline; 42 currently pending)
Total Applications: 427 (career history, across all art units)
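The headline figures above are simple ratios. A minimal sketch of how they appear to be derived, assuming (per the projections footnote below) that grant probability is the career allow rate and that the with-interview figure adds the interview lift in percentage points:

```python
# Hypothetical reconstruction of the examiner-metrics card (assumed formulas).
granted = 227    # granted applications (from the card)
resolved = 385   # resolved applications (from the card)

career_allow_rate = granted / resolved   # ~0.59, shown as "59%"
interview_lift = 0.401                   # +40.1 percentage points

# Assumption: the "with interview" figure adds the lift to the base rate.
with_interview = career_allow_rate + interview_lift

print(f"Career allow rate: {career_allow_rate:.0%}")   # 59%
print(f"With interview:    {with_interview:.0%}")      # 99%
```

The additive-lift reading is an inference from the displayed numbers (59% + 40.1 points ≈ 99%), not a documented formula of the tool.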

Statute-Specific Performance

§101: 18.1% (-21.9% vs TC avg)
§103: 40.1% (+0.1% vs TC avg)
§102: 18.4% (-21.6% vs TC avg)
§112: 16.1% (-23.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 385 resolved cases
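Each row above pairs the examiner's per-statute allowance rate with its offset from the Tech Center average, so the implied TC average can be back-calculated. A sketch assuming each delta is (examiner rate - TC average) in percentage points:

```python
# Back-calculating the Tech Center averages implied by the per-statute deltas.
# Assumption: delta = examiner rate - TC average, in percentage points.
stats = {
    "§101": (18.1, -21.9),
    "§103": (40.1, +0.1),
    "§102": (18.4, -21.6),
    "§112": (16.1, -23.9),
}

for statute, (examiner_rate, delta) in stats.items():
    tc_avg = examiner_rate - delta
    print(f"{statute}: examiner {examiner_rate}% vs TC avg {tc_avg:.1f}%")
```

Under that assumption every row implies the same ~40.0% baseline, consistent with a single Tech Center average estimate for the unit.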

Office Action

§101 §102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The following title is suggested: --Computer-Implemented Methods and Systems for Displaying Aircraft Textual Information in a Native Mobile Application--.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 for being directed towards a patent ineligible judicial exception in the form of an abstract mental process under the broadest reasonable interpretation. Independent Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims regard a process that, as drafted under its broadest reasonable interpretation, covers performance of the limitations as a mental process, but for the recitation of generic computing devices and software.

In regards to the process of claim 1, the claimed functionality could be practiced as a mental process in the following manner: providing a mobile device configured for receiving communications (a user could manually/physically provide or make available a mobile device such as a smart phone or tablet); providing a display associated with the mobile device (a user could manually/physically provide or make available the display of a mobile device such as a smart phone or tablet); storing one or more keywords in a database associated with the native mobile application (a human can maintain a list of important aviation keywords, such as call signs and flight path information, on paper using a pen); receiving an audio input to the native mobile application (a human can listen for input audio in the environment of a running application); transcribing the audio input in the native mobile application and displaying a transcript on the display (a human can mentally evaluate what was said and write a transcript using pen and paper, wherein the paper constitutes a manual display medium); and highlighting the one or more keywords of relevance (a human can use a pen to underline or a highlighter to highlight relevant terms in the human-produced transcript).

This judicial exception is not integrated into a practical application. Outside of the identified abstract idea, the claimed invention only recites generic mobile computer, computer display, and software components which amount to no more than mere instructions to implement an otherwise abstract idea using a generic computer. The computer in the claims is only used as a tool to carry out an otherwise abstract idea by executing a program and is not improved as a tool. Other than generic computing and display components, the claim does not include any limitations left over after the abstract idea has been extracted that are directed towards a practical application or a technical improvement under Step 2A Prong 2 considerations.

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The above-identified additional generic computer components and software are no more than mere instructions to apply the exception using generic computer components that are well-known, routine, and conventional, as is evidenced by Bancorp Services v. Sun Life (Fed. Cir. 2012) and Alice Corp. v. CLS Bank (2014). Furthermore, the use of a computer display to output results does not constitute an inventive concept. See TLI Communications, 823 F.3d at 612-13, 118 USPQ2d at 1747-48 (Fed. Cir. 2016) and Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1354, 119 USPQ2d 1739, 1742 (Fed. Cir. 2016). Accordingly, claim 1 is not directed towards patent eligible subject matter under 35 U.S.C. 101.

Independent Claim 11 recites a computer system for carrying out the method of claim 1, and thus, is rejected under similar rationale. Claim 11 also includes the generic computer components addressed in claim 1 as well as an additional structure in the form of an interface for receiving audio. This interface acts as a physical replacement for the human ear used to listen to audio and is being used for an ordinary purpose of receiving audio. Moreover, such interfaces (headsets that are wired or Bluetooth enabled) are well-known in the art, as is evidenced by Yamkovoy, et al. (U.S. PG Publication: 2022/0021999 A1 - an electronic flight bag (EFB) having a wired and wireless (e.g., Bluetooth) connection is "commonly used," Paragraph 0034), Mitchell (U.S. PG Publication: 2013/0259261 A1 - audio playback through wired or wireless/Bluetooth connections are "well known to persons skill in this field" and "readily available commercially," Paragraph 0041), and Zurek, et al. (U.S. PG Publication: 2006/0140422 A1 - headsets that are connected via wire or via Bluetooth for communication are "known in the art" and common, Paragraph 0002).

Independent Claim 18 recites a computer-readable medium storing computer-executable instructions for carrying out the method of claim 1, and thus, is rejected under similar rationale. Note that the generic computer component (i.e., computer-readable medium) was also addressed in the rejection of claim 1.

The remaining dependent claims fail to add patent eligible subject matter to their respective parent claims: Claim 2 regards a software filter that automates a human process of being able to understand speech and ignore/overlook noise in a noisy environment such as an aircraft cabin. Claims 3-4 and 19 narrow the data types that can be understood by a user in audio and transcribed. Claims 5-6 narrow the origin and type of an audio signal that can be heard and understood by a human. Claims 7-8 and 12-14 recite audio headsets and their connections that were addressed in the rejection of claim 11. Claim 9 regards generic software components that were addressed in the claim 1 rejection. Claims 10 and 20 regard generic computer software as addressed in the claim 1 rejection and a human decision to stop usage of additional applications when they are controlling a vehicle. Claims 15-18 regard well-known aircraft components used for their ordinary purpose on board an aircraft (communication, speed, altitude) and do not constitute an inventive concept. Moreover, these components are known in the art, as is evidenced by Farmakis, et al. (U.S. Patent: 5,714,948 - transponders are "known" (Col. 1, Lines 38-45) and "standard or conventional aircraft instrumentation" includes altimeters and airspeed indicators (Col. 20, Line 66 - Col. 21, Line 16)) and Parker (U.S. Patent: 4,490,117 - "standard aircraft instruments" include altimeters and air speed indicators (Col. 2, Lines 58-68) and airplane transceivers are conventional (Col. 6, Lines 1-56)).

Claims 18-20 regard an embodiment of the claimed invention directed towards a "computer-readable medium" storing computer executable instructions. Per MPEP 2106.03(II): "A claim whose BRI covers both statutory and non-statutory embodiments embraces subject matter that is not eligible for patent protection and therefore is directed to non-statutory subject matter." Moreover, per this same section of the MPEP: "the BRI of machine-readable media can encompass non-statutory transitory forms of signal transmission, such as a propagating electrical or electromagnetic signal per se. See In re Nuijten, 500 F.3d 1346, 84 USPQ2d 1495 (Fed. Cir. 2007). When the BRI encompasses transitory forms of signal transmission, a rejection under 35 U.S.C. 101 as failing to claim statutory subject matter would be appropriate." Claims 18-20 recite such a computer-readable medium that has a broadest reasonable interpretation (BRI) that includes signals per se. The originally filed specification does not include a clear and unmistakable applicant's definition of the term, or disavowal of claim scope, to exclude signals per se from the term. Accordingly, claims 18-20 are also not patent eligible because they are directed towards a signal per se under the BRI that does not fall within the four statutory categories under Step 1 analysis.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-6, 8, 11-12, and 15-19 are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by Kanagarajan, et al. (U.S. PG Publication: 2023/0215278 A1).

With respect to Claim 1, Kanagarajan discloses: A computer-implemented method for displaying textual information via a native mobile application, comprising: providing a mobile device configured for receiving communications (providing a mobile device (e.g., an electronic flight bag or mobile computer that can be placed aboard an aircraft) for receiving voice broadcast communications, Paragraphs 0035, 0042, 0045, 0049, 0053, and 0063); providing a display associated with the mobile device (provided display for the mobile device, Paragraphs 0035, 0040, 0042, and 0046; Fig. 1, Element 110); installing the native mobile application on the mobile device (Fig. 1, Element 102 showing the application installed on the mobile device 160 (e.g., the electronic flight bag); see also Paragraphs 0035, 0049, and Paragraph 0059 discussing programming of the computer; note that in order for the program to reside on the mobile device, it must first be installed/programmed on that mobile device 160); running the native mobile application on the mobile device (executing program instructions including the native application running on the mobile device (e.g., electronic flight bag), Paragraphs 0020 and 0035); storing one or more keywords in a database associated with the native mobile application (the transcription application has a stored dataset related to "vocabulary of the aviation industry," Paragraph 0039); receiving an audio input to the native mobile application (receiving voice-based broadcast messages at the native application of the electronic device, Paragraphs 0038-0039 and 0054); transcribing the audio input in the native mobile application and displaying a transcript on the display associated with the mobile device (executing voice to text transcription and then rendering the text messages on a display device, Paragraphs 0035, 0040, 0042, and 0046; see also Figs. 3-4); and highlighting the one or more keywords of relevance in the native mobile application in the transcript displayed on the display associated with the mobile device (highlighting text portions determined to be relevant (e.g., changes to the active flight plan, "relevant information" such as "based on an active flight plan and ATC conversation"), Paragraphs 0034, 0044, 0046, 0051, and 0056; see also the displayed highlighting depicted in Fig. 4, Element 304).

With respect to Claim 2, Kanagarajan further discloses: The method of claim 1, further comprising providing a signal filter in the native mobile application adapted to optimize speech recognition functions in environments with background noise (voice to text transcription in the native mobile application trained on "noise...of the aviation industry" so as to extract and accurately transcribe speech and not distortion/noise that leads to "inaccurate transcriptions," Paragraph 0039).

With respect to Claim 3, Kanagarajan further discloses: The method of claim 1, wherein a keyword is a vessel call sign (note the aviation vocabulary noted in Paragraph 0039 and the transcription of airplane call signs (e.g., AXIS 65); see also aircraft identifier discussed in Paragraph 0040).

With respect to Claim 4, Kanagarajan further discloses: The method of claim 3, wherein the vessel is an aircraft (note the aviation vocabulary noted in Paragraph 0039 and the transcription of airplane call signs (e.g., AXIS 65); see also aircraft identifier discussed in Paragraph 0040).

With respect to Claim 5, Kanagarajan further discloses: The method of claim 1, wherein the audio input originates from a radio frequency communication (broadcast messages "over a radio channel," Paragraphs 0034 (discussing frequency tuning), 0038, and 0061).

With respect to Claim 6, Kanagarajan further discloses: The method of claim 5, wherein the radio frequency communication is sent from air traffic control and other aircraft (communications originating from air traffic control (ATC) and "other aircraft," Paragraphs 0033, 0035, and 0040).

With respect to Claim 8, Kanagarajan further discloses: The method of claim 5, wherein the audio input is received via a Bluetooth connection from a device on a line running to a headset or an audio speaker internal to a vessel (the audio input is received at an audio speaker internal to an aircraft (e.g., worn by a pilot), Paragraph 0038; Fig. 1, Element 151; see MPEP 2131 for a discussion of anticipation of a claim in the alternative if any of the alternatives are known in the prior art).

With respect to Claim 11, Kanagarajan discloses: A system for displaying textual information via a native mobile application, comprising: a mobile device configured for receiving communications (a mobile device (e.g., an electronic flight bag or mobile computer that can be placed aboard an aircraft) for receiving voice broadcast communications, Paragraphs 0035, 0042, 0045, 0049, 0053, and 0063); a display associated with the mobile device (provided display for the mobile device, Paragraphs 0035, 0040, 0042, and 0046; Fig. 1, Element 110); a database associated with the mobile device (the transcription application has a stored dataset related to "vocabulary of the aviation industry," Paragraph 0039); and an interface for receiving an audio input on the mobile device (receiving voice-based broadcast messages at the native application of the electronic device, Paragraphs 0038-0039 and 0054; see also structural components in Fig. 1, receiver (140), headset speaker (151), and processing system (160)), wherein (see MPEP 2111.04(I), which notes that wherein clauses do not patentably limit structural limitations when they merely recite an intended use of a structure. The following limitations describe run-time operations that are intended to be performed by the system when it is in operation; however, this claim is directed towards a static system defined in terms of its components, not an actively performed method claim. As such, the wherein clause and the limitations that follow do not patentably limit the invention set forth in claim 11, but have been addressed in the interest of compact prosecution. Applicant should consider amending the claim to recite that the mobile device is "configured to" perform the steps following the wherein clause, to effectively rely on the functional steps to patentably limit the mobile device, and adjusting the language of the limitations accordingly (e.g., “an audio input is received…” would be changed to --receive the audio input by the native mobile application through the interface…--)): a native mobile application is running on the mobile device (executing program instructions including the native application running on the mobile device (e.g., electronic flight bag), Paragraphs 0020 and 0035); one or more keywords of relevance are stored in the database associated with the mobile device (the transcription application has a stored dataset relevant to "vocabulary of the aviation industry," Paragraph 0039); an audio input is received by the native mobile application running on the mobile device through the interface for receiving an audio input on the mobile device (receiving voice-based broadcast messages at the native application of the electronic device, Paragraphs 0038-0039 and 0054); the audio input is transcribed in the native mobile application; the transcript of the audio input is displayed on the display associated with the mobile device (executing voice to text transcription and then rendering the text messages on a display device, Paragraphs 0035, 0040, 0042, and 0046; see also Figs. 3-4); and the one or more keywords of relevance are displayed on the display associated with the mobile device in the native mobile application running on the mobile device (highlighting aviation-related text portions determined to be relevant (e.g., changes to the active flight plan, "relevant information" such as "based on an active flight plan and ATC conversation"), Paragraphs 0034, 0044, 0046, 0051, and 0056; see also the displayed highlighting depicted in Fig. 4, Element 304).

With respect to Claim 12, Kanagarajan further discloses: The system of claim 11, further comprising a headset for receiving an audio input (headset for receiving a voice broadcast, Paragraph 0038 and Fig. 1, Element 151 showing a headset speaker).

With respect to Claim 15, Kanagarajan further discloses: The system of claim 11, further comprising a transponder for receiving radio frequency communications (receiver/transmitter for voice-based radio channel communications, Paragraphs 0034 and 0038; Fig. 1, Elements 140 and 142).

With respect to Claim 16, Kanagarajan further discloses: The system of claim 15, wherein the transponder is a transponder on an aircraft (the system having the transponder is on board an aircraft, Paragraphs 0035-0036 and Fig. 1, Element 10).

With respect to Claim 17, Kanagarajan further discloses: The system of claim 11, further comprising one or more of an altimeter and an airspeed indicator (in addition to these components being inherently required in an aircraft, Kanagarajan describes altimeter and flying/air speed indications, Paragraph 0047).

Claim 18 recites a computer-readable medium storing computer-executable instructions for carrying out the method of claim 1, and thus, is rejected under similar rationale. Moreover, Kanagarajan teaches method implementation as a program stored on a computer-readable medium (Paragraph 0065).

With respect to Claim 19, Kanagarajan further discloses: The computer-readable medium according to claim 18, further comprising a database storing one or more vessel call signs (note the aviation vocabulary noted in Paragraph 0039 and the transcription of airplane call signs (e.g., AXIS 65); see also aircraft identifier discussed in Paragraph 0040).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 7 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Kanagarajan, et al. in view of Ruttler, et al. (U.S. PG Publication: 2017/0324437 A1).

With respect to Claim 7, Kanagarajan discloses the method for transcribing aircraft radio broadcasts using a native application running on a mobile device aboard an aircraft as applied to Claim 3. While Kanagarajan also discloses that audio may be received by a headset (Paragraph 0038), Kanagarajan never specifically discloses that the headset has a wired connection branching off from a line running thereto. Ruttler, however, discloses that a headset for receiving ATC broadcasts "can be wired" (Paragraphs 0068-0069). Kanagarajan and Ruttler are analogous art because they are from a similar field of endeavor in speech-to-text in an aircraft application. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to add the headset wiring of Ruttler to the headset for receiving ATC broadcasts taught by Kanagarajan to provide a predictable result in the form of a more stable audio connection that is less prone to interference and battery failures. Claim 13 contains subject matter similar to Claim 7, and thus, is rejected under similar rationale.

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Kanagarajan, et al. in view of Naiman, et al. (2023/0057709 A1).

With respect to Claim 9, Kanagarajan discloses the method for transcribing aircraft broadcasts using a native application running on a mobile device aboard an aircraft wherein an ATC broadcast includes an aircraft call sign that is transcribed, as applied to Claim 3. Kanagarajan does not teach that the ATC audio is received automatically by the mobile device via an application programming interface. Naiman, however, discloses that an API receives ATC audio for natural language conversation (Paragraphs 0016, 0047, 0058, and 0065). Kanagarajan and Naiman are analogous art because they are from a similar field of endeavor in speech-to-text in an aircraft application. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to utilize the API taught by Naiman in the audio reception of call signs for transcription taught by Kanagarajan to achieve a predictable result of providing a known programming interface that allows for audio to be used across different applications (e.g., playback, transcription, or recording).

Claims 10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kanagarajan, et al. in view of Crosbie, et al. (U.S. PG Publication: 2015/0099495).

With respect to Claim 10, Kanagarajan discloses the method for transcribing aircraft broadcasts using a native application running on a mobile device aboard an aircraft as applied to Claim 1. Kanagarajan does not teach that the native mobile application blocks other operations on the mobile device while the device is in motion. Crosbie, however, discloses vehicle levels that correlate to functions or characteristics of an app that are allowed or disallowed based upon vehicle operation such as movement, wherein when device/vehicle movement is detected for certain applications, the application is disabled or terminated (Paragraphs 0002, 0011-0012, 0019-0020, 0065, 0111-0112, and 0115). Kanagarajan and Crosbie are analogous art because they are from a similar field of endeavor in speech-to-text in a mobile application. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to apply a safe-application permission while disabling or terminating other applications as taught by Crosbie in the native application for speech to text taught by Kanagarajan, since that application transcribes and displays critical/necessary flight information, to provide a predictable result of "reducing fears by vehicle manufacturers of the proliferation of execution of apps in vehicles while in motion" (Crosbie, Paragraph 0109). Claim 20 recites subject matter similar to Claim 10, and thus, is rejected under similar rationale.

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Kanagarajan, et al. in view of Tu, et al. (U.S. PG Publication: 2015/0208193 A1).

With respect to Claim 14, Kanagarajan discloses the system for transcribing aircraft radio broadcasts using a native application running on a mobile device aboard an aircraft as applied to Claim 11. While Kanagarajan also discloses that audio may be received by a headset (Paragraph 0038), Kanagarajan never specifically discloses a Bluetooth connection from a device on a line running to the headset. Tu, however, discloses a Bluetooth connection from a device (Fig. 1, Element 111) having a line running to the headset (Fig. 1, Elements 120 and 125; see also Paragraphs 0029 and 0040 regarding Bluetooth connection and wired headset connections). Kanagarajan and Tu are analogous art because they are from a similar field of endeavor in voice communication devices using headsets. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to utilize the Bluetooth-wired setup taught by Tu in the headset taught by Kanagarajan to provide a predictable result of allowing wired headsets to allow greater freedom of movement by enabling wireless implementation (Tu, Paragraph 0006).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Kocour, et al. ("Automatic processing pipeline for collecting and annotating air-traffic voice communication data," 2021) - teaches performing speech-to-text processing on pilot audio communications including filtering and call sign recognition/transcription (Abstract; Section 3.1, Pages 4-5; and Figs. 1-2). Nama, et al. (U.S. PG Publication: 2023/0115227 A1) - teaches voice to text transcription of ATC and/or pilot broadcasts that highlights transcribed critical information and relevant "nonstandard phraseology" (Paragraphs 0030, 0032, and 0039).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES S WOZNIAK whose telephone number is (571)272-7632. The examiner can normally be reached 7-3, off alternate Fridays. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Flanders, can be reached at (571)272-7516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

JAMES S. WOZNIAK
Primary Examiner
Art Unit 2655

/JAMES S WOZNIAK/
Primary Examiner, Art Unit 2655

Prosecution Timeline

Jan 21, 2024
Application Filed
Oct 07, 2025
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597422
SPEAKING PRACTICE SYSTEM WITH RELIABLE PRONUNCIATION EVALUATION
2y 5m to grant • Granted Apr 07, 2026
Patent 12586569
Knowledge Distillation with Domain Mismatch For Speech Recognition
2y 5m to grant • Granted Mar 24, 2026
Patent 12511476
CONCEPT-CONDITIONED AND PRETRAINED LANGUAGE MODELS BASED ON TIME SERIES TO FREE-FORM TEXT DESCRIPTION GENERATION
2y 5m to grant • Granted Dec 30, 2025
Patent 12512100
AUTOMATED SEGMENTATION AND TRANSCRIPTION OF UNLABELED AUDIO SPEECH CORPUS
2y 5m to grant • Granted Dec 30, 2025
Patent 12475882
METHOD AND SYSTEM FOR AUTOMATIC SPEECH RECOGNITION (ASR) USING MULTI-TASK LEARNED (MTL) EMBEDDINGS
2y 5m to grant • Granted Nov 18, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 59%
With Interview: 99% (+40.1%)
Median Time to Grant: 3y 7m
PTA Risk: Low
Based on 385 resolved cases by this examiner. Grant probability derived from career allow rate.
