Prosecution Insights
Last updated: April 19, 2026
Application No. 18/938,774

CREATION OF WORKSITE DATA RECORDS VIA CONTEXT-ENHANCED USER DICTATION

Non-Final OA (§101, §103)
Filed
Nov 06, 2024
Examiner
FEACHER, LORENA R
Art Unit
3625
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Weavix Inc.
OA Round
1 (Non-Final)
Grant Probability: 29% (At Risk)
Expected OA Rounds: 1-2
Time to Grant: 4y 8m
With Interview: 61%

Examiner Intelligence

Career Allow Rate: 29% (118 granted / 410 resolved; -23.2% vs TC avg)
Interview Lift: +32.3% on resolved cases with interview
Avg Prosecution: 4y 8m (34 currently pending)
Total Applications: 444 across all art units

Statute-Specific Performance

§101: 36.5% (-3.5% vs TC avg)
§103: 36.0% (-4.0% vs TC avg)
§102: 7.0% (-33.0% vs TC avg)
§112: 18.4% (-21.6% vs TC avg)
Deltas vs Tech Center average estimates • Based on career data from 410 resolved cases

Office Action

Rejections: §101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Status of Claims

This action is a first action on the merits in response to the application filed on 11/06/2024. Claims 1-26 are currently pending and have been examined in this application.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-26 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claim 1 recites: receiving, by a computing device from a user, an audio utterance, wherein the audio utterance is stored as a digital audio recording by the computing device; generating, by the computing device, a digital text file from the digital audio recording of the audio utterance; identifying, by the computing device, a worksite context for the audio utterance, wherein the worksite context for the audio utterance is based on information surrounding the user when the audio utterance is recorded; identifying, via an artificial intelligence (AI) model, a type of workflow record to generate for the audio utterance based on the digital text file; and generating a workflow record via the AI model corresponding to the identified type of workflow record, wherein the AI model populates a plurality of data fields of the workflow record using one or more of the digital text file, the worksite context for the audio utterance, or a current task of the user. 
These limitations, under their broadest reasonable interpretation, cover Certain Methods of Organizing Human Activity related to managing personal behaviors or interactions (following instructions), but for the recitation of generic computer components (e.g., a processor and memory). For example, receiving audio from a worker, generating a text file, and applying an AI model to generate a work record involves managing behaviors of workers. Accordingly, the claim recites an abstract idea of Certain Methods of Organizing Human Activity. Additionally, the claim encompasses Mental Processes related to observation and evaluation of data.

Independent Claims 10 and 19 substantially recite the subject matter of Claim 1 and also include the abstract ideas identified above. The dependent claims encompass the same abstract ideas. For instance, Claim 2 is directed to information surrounding the user when audio is recorded; Claim 3 is directed to the AI model populating data fields of a workflow (analysis); Claims 4 and 5 are directed to the AI model being an NLP model that corrects semantic gaps (analysis); Claim 6 is directed to the computing device that receives the audio utterance; Claim 7 is directed to the AI model updating an existing workflow; Claim 8 is directed to detecting a second computing device (analysis); and Claim 9 is directed to the second computing device being a sensor. Claims 9-18 and 20-26 substantially recite the subject matter of Claims 2-8 and encompass the same abstract ideas.

The judicial exceptions are not integrated into a practical application. Claim 1 recites the additional elements of a smart radio and a computing device. Claim 10 recites the additional elements of a smart radio comprising at least one hardware processor and non-transitory memory. Claim 19 recites the additional element of a smart radio. Claims 8-9 recite the additional element of a second computing device including a sensor. 
These are generic computer components recited at a high level of generality as performing generic computer functions (see Spec ¶0151-¶0152). For instance, the step of receiving an audio utterance is gathering data. The steps related to generating a digital text file, identifying a worksite context, identifying a type of workflow, and generating a workflow involve collecting and analyzing data. Examiner notes that there are no details on how the AI model is used or how the identifying and generating occurs. Therefore, the broadest reasonable interpretation is analyzing data (e.g., observation and evaluation). Regarding Claim 9, a sensor that monitors a machine is generic data gathering activity.

Each of the additional limitations is no more than mere instructions to apply the exception using generic computer components (e.g., a processor). The combination of these additional elements is no more than mere instructions to apply the exception using a generic computer component (e.g., a processor). Therefore, the additional elements do not integrate the abstract ideas into a practical application because they do not impose meaningful limits on practicing the abstract idea. Therefore, the claims are directed to an abstract idea.

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As stated above, the additional elements of a processor, a memory, a CRM, etc. are considered generic computer components performing generic computer functions that amount to no more than instructions to implement the judicial exception. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The dependent claims, when analyzed both individually and in combination, are also held to be ineligible for the same reasons above, and the additionally recited limitations fail to establish that the claims are not directed to an abstract idea. 
The additional limitations of the dependent claims, when considered individually and as an ordered combination, do not amount to significantly more than the abstract idea. Looking at these limitations as an ordered combination and individually adds nothing additional that is sufficient to amount to significantly more than the recited abstract idea because they simply provide instructions to use generic computer components to "apply" the recited abstract idea. Thus, the elements of the claims, considered both individually and as an ordered combination, are not sufficient to ensure that the claim as a whole amounts to significantly more than the abstract idea itself. Therefore, Claims 1-26 are not patent eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 (AIA) are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 2, 6-11, 15-20, 22, 25 and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Lee et al. (US 2024/0403613) in view of Swansey et al. (US 2019/0250882).

Claim 1: Lee discloses: A method for improving creation of worksite data records with context-enhanced user dictation into a smart radio comprising: (see at least ¶0032-¶0033, utilizing generative AI in building management; see also ¶0054, mobile device for audio/video capability) receiving, by a computing device from a user, an audio utterance, wherein the audio utterance is stored as a digital audio recording by the computing device; (see at least ¶0054, receiving data from technicians including audio and video) identifying, by the computing device, a worksite context for the audio utterance, wherein the worksite context for the audio utterance is based on information surrounding the user when the audio utterance is recorded; (see at least ¶0032-¶0033, utilizing LLMs to analyze data such as receiving audio data from technicians to create reports, predictions, recommendations, etc.; see also ¶0038) identifying, via an artificial intelligence (AI) model, a type of workflow record to generate for the audio utterance based on the digital text file; and (see at least ¶0038, system can present to technician or manager a report regarding an action to be taken; see also ¶0034, the system can generate prescriptions (recommendations) for a service technician; see also ¶0054) generating a workflow record via the AI model corresponding to the identified type of workflow record, wherein the AI model populates a plurality of data fields of the workflow record using one 
or more of the digital text file, the worksite context for the audio utterance, or a current task of the user. (see at least ¶0034, based on technician feedback system generates a prescription or recommendation; see also ¶0035, processing unstructured data using LLMs to generate outputs regarding equipment servicing)

While Lee discloses the above limitations, Lee does not explicitly recite the following limitations; however, Swansey does disclose: receiving, by a computing device from a user, an audio utterance, wherein the audio utterance is stored as a digital audio recording by the computing device; (see at least ¶0020, receives voice data and converts the voice data to text data; see also ¶0084) generating, by the computing device, a digital text file from the digital audio recording of the audio utterance; (see at least ¶0020, receives voice data and converts the voice data to text data; see also ¶0084)

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the service recommendations using generative AI of Lee with the voice interaction including voice to text conversion of Swansey to assist workers in the field with data collection using hands-free voice recognition devices (see ¶0003).

Claim 2: Lee and Swansey disclose claim 1. Swansey further discloses: wherein the information surrounding the user when the audio utterance is recorded includes one or more of: data corresponding to a geofence, wherein the geofence corresponds to a virtual perimeter or boundary defined using geographic coordinates; location of devices proximate to the smart radio; properties corresponding to devices proximate to the smart radio; additional devices proximate to the smart radio; user actions prior to the audio utterance; a job of the user; a role of the user; and communications to the user received by the smart radio. (see at least ¶0079-¶0080, when a user enters a geofence the application determines a current location of the user)

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the service recommendations using generative AI of Lee and the voice interaction including voice to text conversion of Swansey to assist workers in the field with data collection using hands-free voice recognition devices (see ¶0003).

Claim 6: Lee and Swansey disclose claim 1. Lee further discloses: wherein the computing device receives the audio utterance from the user further comprising: recording, via a microphone of the computing device, the audio utterance; or recording, via a recording device communicatively coupled to the computing device, the audio utterance, wherein the recording device transmits the audio utterance as the digital audio recording to the computing device. (see at least ¶0161, audio input via a microphone)

Claim 7: Lee and Swansey disclose claim 1. Lee further discloses: wherein the AI model updates an existing workflow record corresponding to the identified type of workflow record, wherein the AI model populates the plurality of data fields of the existing workflow record using one or more of the audio utterance, the worksite context for the audio utterance, or the current task of the user. (see at least ¶0038, system can present to technician or manager a report regarding an action to be taken including requesting feedback to update; see also ¶0034, the system can generate prescriptions (recommendations) for a service technician including updating based on new data; see also ¶0054)

Claim 8: Lee and Swansey disclose claim 1. 
Lee further discloses: wherein the computing device is a first computing device, further comprising: detecting a presence of a second computing device; and (see at least ¶0033, sensor data is collected, the onsite technician is prompted for additional feedback) in response to detecting the presence, automatically transmitting a notification through a speaker of the first computing device prompting the user to provide an audio utterance. (see at least ¶0033, sensor data is collected, the onsite technician is prompted for additional feedback including speech/audio; see also ¶0038)

Claim 9: Lee and Swansey disclose claim 8. Lee further discloses: wherein the second computing device is a sensor that monitors a status of a machine. (see at least ¶0033, sensor data; see also ¶0040)

Claim 19: Lee discloses: A method for improving creation of worksite data records with context-enhanced user dictation into a smart radio comprising: (see at least ¶0032-¶0033, utilizing generative AI in building management; see also ¶0054, mobile device for audio/video capability) receiving, by a smart radio from a user, an audio utterance, wherein the audio utterance is stored as a digital audio recording by the smart radio; (see at least ¶0054, receiving data from technicians including audio and video) identifying, by the smart radio, a worksite context for the audio utterance, wherein the worksite context for the audio utterance is based on information surrounding the user when the audio utterance is recorded; identifying, via an artificial intelligence (AI) model, an existing workflow record to update based on the digital text file; and (see at least ¶0038, system can present to technician or manager a report regarding an action to be taken; see also ¶0034, the system can generate prescriptions (recommendations) for a service technician; see also ¶0054) updating the existing workflow record via the AI model, wherein the AI model populates a plurality of data fields of the existing workflow record using one or more of the digital text file, the worksite context for the audio utterance, or a current task of the user. (see at least ¶0038, system can present to technician or manager a report regarding an action to be taken including requesting feedback to update; see also ¶0034, the system can generate prescriptions (recommendations) for a service technician including updating based on new data; see also ¶0054)

While Lee discloses the above limitations, Lee does not explicitly recite the following limitations; however, Swansey does disclose: receiving, by a computing device from a user, an audio utterance, wherein the audio utterance is stored as a digital audio recording by the computing device; (see at least ¶0020, receives voice data and converts the voice data to text data; see also ¶0084) generating, by the computing device, a digital text file from the digital audio recording of the audio utterance; (see at least ¶0020, receives voice data and converts the voice data to text data; see also ¶0084)

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the service recommendations using generative AI of Lee with the voice interaction including voice to text conversion of Swansey to assist workers in the field with data collection using hands-free voice recognition devices (see ¶0003). 
Claim 20: While Lee and Swansey disclose claim 19, Lee does not explicitly disclose the following limitation; however, Swansey further discloses: wherein the information surrounding the user when the audio utterance is recorded includes one or more of: data corresponding to a geofence, wherein the geofence corresponds to a virtual perimeter or boundary defined using geographic coordinates; location of devices proximate to the smart radio; properties corresponding to devices proximate to the smart radio; additional devices proximate to the smart radio; user actions prior to the audio utterance; a job of the user; a role of the user; and communications to the user received by the smart radio. (see at least ¶0079-¶0080, when a user enters a geofence the application determines a current location of the user)

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the service recommendations using generative AI of Lee and the voice interaction including voice to text conversion of Swansey to assist workers in the field with data collection using hands-free voice recognition devices (see ¶0003).

Claim 24: Lee and Swansey disclose claim 19. Lee further discloses: wherein the smart radio receives the audio utterance from the user further comprising: recording, via a microphone of the smart radio, the audio utterance. (see at least ¶0161, audio input via a microphone)

Claim 25: Lee and Swansey disclose claim 19. Lee further discloses: wherein the smart radio automatically transmits a notification through a speaker of the smart radio prompting the user to provide an audio utterance in response to detecting a presence of a computing device. (see at least ¶0033, sensor data is collected, the onsite technician is prompted for additional feedback including speech/audio; see also ¶0038)

Claim 26: Lee and Swansey disclose claim 25. 
Lee further discloses: wherein the computing device is a sensor that monitors a status of a machine. (see at least ¶0033, sensor data; see also ¶0040)

Claims 10, 11 and 15-18 for a system (Lee, see ¶0042) substantially recite the subject matter of Claims 1, 2 and 6-9 and are rejected based on the same rationale.

Claims 3, 12 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Lee et al. (US 2024/0403613) in view of Swansey et al. (US 2019/0250882) further in view of Ignatyev et al. (US 2017/0235735).

Claim 3: While Lee and Swansey disclose claim 1, neither explicitly discloses the following limitation; however, Ignatyev does disclose: wherein the AI model populates the plurality of data fields of the workflow record further comprising: (see at least ¶0025-¶0027, applying machine learning to identify the most likely candidate text or string for placement into a specified data field) segmenting, by the AI model, the digital text file into text strings, wherein each text string corresponds to a particular data field of the plurality of data fields; and (see at least ¶0025-¶0027, applying machine learning to identify the most likely candidate text or string for placement into a specified data field; see also ¶0040, infer or predict proper placement of unstructured data such as text phrases and segments of phrases) entering, by the AI model, the text strings into the plurality of data fields. (see at least ¶0025-¶0027, applying machine learning to identify the most likely candidate text or string for placement into a specified data field)

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the service recommendations using generative AI of Lee and the voice interaction including voice to text conversion of Swansey with the applying of machine learning to candidate text or string placement of Ignatyev to infer or predict proper placement of unstructured data such as text, phrases and segments (see Abstract).

Claim 21: While Lee and Swansey disclose claim 19, neither explicitly discloses the following limitation; however, Ignatyev does disclose: wherein the AI model populates the plurality of data fields of the existing workflow record further comprising: (see at least ¶0025-¶0027, applying machine learning to identify the most likely candidate text or string for placement into a specified data field) segmenting, by the AI model, the digital text file into text strings, wherein each text string corresponds to a particular data field of the plurality of data fields; and (see at least ¶0025-¶0027, applying machine learning to identify the most likely candidate text or string for placement into a specified data field; see also ¶0040, infer or predict proper placement of unstructured data such as text phrases and segments of phrases) entering, by the AI model, the text strings into the plurality of data fields. (see at least ¶0025-¶0027, applying machine learning to identify the most likely candidate text or string for placement into a specified data field)

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the service recommendations using generative AI of Lee and the voice interaction including voice to text conversion of Swansey with the applying of machine learning to candidate text or string placement of Ignatyev to infer or predict proper placement of unstructured data such as text, phrases and segments (see Abstract).

Claim 12 for a system substantially recites the subject matter of Claim 1 and is rejected based on the same rationale.

Conclusion

The prior art made of record and not relied upon is considered relevant but not applied:

Heuman et al. (US 2024/0193554) discloses a proximity detection system receiving the proximity signal from the equipment tag and determining when the mobile computer is within the proximity range of the first equipment. The mobile application is configured to display the maintenance information on a graphical interface when the mobile computer is within the proximity range.

Jaggers et al. (US 2021/0398054) discloses that a technician present at a service or repair event location can make verbal observations while engaged in a machine, device, or system diagnosis. He can also provide feedback about any information that he is provided related to the service or repair event.

Any inquiry of a general nature or relating to the status of this application or concerning this communication or earlier communications from the Examiner should be directed to Renae Feacher, whose telephone number is 571-270-5485. The Examiner can normally be reached Monday-Friday, 9:00 am - 5:00 pm. If attempts to reach the examiner by telephone are unsuccessful, the Examiner's supervisor, Beth Boswell, can be reached at 571-272-6737. 
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://portal.uspto.gov/external/portal/pair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866.217.9197 (toll-free).

Any response to this action should be mailed to: Commissioner of Patents and Trademarks, Washington, D.C. 20231, or faxed to 571-273-8300. Hand delivered responses should be brought to the United States Patent and Trademark Office Customer Service Window: Randolph Building, 401 Dulany Street, Alexandria, VA 22314.

/Renae Feacher/
Primary Examiner, Art Unit 3625

Prosecution Timeline

Nov 06, 2024
Application Filed
Dec 21, 2025
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12567014: PATRON PRESENCE BASED WORKFORCE CAPACITY NOTIFICATION
Granted Mar 03, 2026 • 2y 5m to grant
Patent 12450669: PROCESS FOR SENSING-BASED CROP NUTRIENT MANAGEMENT USING LIMITING-RATE ESTIMATION AND YIELD RESPONSE PREDICTION
Granted Oct 21, 2025 • 2y 5m to grant
Patent 12437320: SYSTEM AND METHOD OF GENERATING EXISTING CUSTOMER LEADS
Granted Oct 07, 2025 • 2y 5m to grant
Patent 12387168: SYSTEMS AND METHODS FOR AUTOMATICALLY INVOKING A DELIVERY REQUEST FOR AN IN-PROGRESS ORDER
Granted Aug 12, 2025 • 2y 5m to grant
Patent 12380511: ASSIGNING MOBILE DEVICE DATA TO A VEHICLE
Granted Aug 05, 2025 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 29%
With Interview: 61% (+32.3%)
Median Time to Grant: 4y 8m
PTA Risk: Low
Based on 410 resolved cases by this examiner. Grant probability derived from career allow rate.
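How these headline figures fit together can be checked with simple arithmetic. A minimal sketch, assuming (as the displayed numbers suggest, though the dashboard does not document its formula) that the career allow rate is granted-over-resolved cases and that the interview lift is added in percentage points:

```python
# Hypothetical reconstruction of the dashboard's projection arithmetic.
# Assumptions (not documented by the source): allow rate = granted / resolved,
# and the "with interview" figure = base rate + interview lift in percentage points.

def career_allow_rate_pct(granted: int, resolved: int) -> float:
    """Allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def with_interview_pct(base_rate_pct: float, interview_lift_pct: float) -> float:
    """Grant probability after adding the interview lift (percentage points)."""
    return base_rate_pct + interview_lift_pct

base = career_allow_rate_pct(118, 410)   # ≈ 28.8, displayed as 29%
lifted = with_interview_pct(29.0, 32.3)  # 61.3, displayed as 61%
print(round(base), round(lifted))        # 29 61
```

This reproduces the displayed 29% and 61%, which supports the additive-lift reading, but the dashboard's actual model may differ (e.g., it could condition on cases that had interviews rather than add a flat offset).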
