Prosecution Insights
Last updated: April 19, 2026
Application No. 17/484,309

COMPUTER-IMPLEMENTED SYSTEM AND METHOD FOR DISTRIBUTED ACTIVITY DETECTION

Status: Final Rejection (§102, §103, nonstatutory double patenting)
Filed: Sep 24, 2021
Examiner: GLASSER, DARA J
Art Unit: 2161
Tech Center: 2100 — Computer Architecture & Software
Assignee: Palo Alto Research Center Incorporated
OA Round: 5 (Final)

Grant Probability: 58% (Moderate)
OA Rounds: 6-7
To Grant: 3y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 58% (grants 58% of resolved cases; 95 granted / 163 resolved; +3.3% vs TC avg)
Interview Lift: +53.9% (strong; allowance rate for resolved cases with an interview vs. without)
Avg Prosecution: 3y 7m (typical timeline; 9 applications currently pending)
Total Applications: 172 (career history, across all art units)
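For readers who want to reproduce the headline figures, here is a minimal Python sketch of the arithmetic. The 95 granted / 163 resolved split comes from the panel above; the with/without-interview case counts are hypothetical placeholders, since the panel reports only the resulting lift, not the underlying split.

# Sketch of the dashboard arithmetic. Only the 95/163 split is from the
# panel above; the interview split below is a hypothetical example chosen
# to show how a lift of roughly +54 percentage points could arise.
granted, resolved = 95, 163
career_allow_rate = granted / resolved               # 0.583 -> shown as "58%"

# Hypothetical interview split (not reported by the panel):
with_iv_granted, with_iv_resolved = 40, 41           # ~97.6% allowed with interview
no_iv_granted, no_iv_resolved = 55, 122              # ~45.1% allowed without

interview_lift = (with_iv_granted / with_iv_resolved
                  - no_iv_granted / no_iv_resolved)

print(f"Career allow rate: {career_allow_rate:.1%}")  # 58.3%
print(f"Interview lift:    {interview_lift:+.1%}")    # +52.5% for this split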

Statute-Specific Performance

§101: 11.6% (-28.4% vs TC avg)
§102: 9.5% (-30.5% vs TC avg)
§103: 47.6% (+7.6% vs TC avg)
§112: 26.7% (-13.3% vs TC avg)

TC averages are estimates. Based on career data from 163 resolved cases.
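A quick consistency check on the figures above: adding each delta back to its rate recovers the implied Tech Center average, which comes out to 40.0% for every statute, consistent with the note that the comparison is a single estimate rather than per-statute data.

# Recover the implied TC average behind each "vs TC avg" delta above.
rates = {"101": (11.6, -28.4), "102": (9.5, -30.5),
         "103": (47.6, +7.6),  "112": (26.7, -13.3)}
for statute, (rate, delta) in rates.items():
    print(f"§{statute}: implied TC avg = {rate - delta:.1f}%")  # 40.0% each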

Office Action

Rejections: §102, §103, nonstatutory double patenting
DETAILED ACTION

This communication is a Final Action in response to correspondence filed on December 10, 2025. Claims 21 and 27 have been amended. Claims 21-32 are pending in the application.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on September 22, 2025 and September 25, 2025 were filed after the mailing date of the Office action on August 20, 2025. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens.
An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 21-24 and 27-30 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 2, 11, and 12 of U.S. Patent No. 11,477,302 in view of Ellis et al. (US Publication No. 2013/0070928). Although the claims at issue are not identical, they are not patentably distinct from each other, as the following limitation-by-limitation comparison of instant claim 21 against claims 1 and 2 of U.S. Patent No. 11,477,302 shows:

Instant claim 21: A computer-implemented method for distributed activity detection, comprising: collecting, at a first time, contextual data for a user from a sensor of a mobile computing device;
'302 claim 1: A computer-implemented system for distributed activity detection, comprising: a server comprising a hardware processor to train models; at least one of a mobile computing device and a sensor device to: process contextual data for a user performing an activity;

Instant claim 21: analyzing on the mobile computing device the contextual data using a model stored on the mobile computing device;
'302 claim 1: extract features from the contextual data; compare the features with one or more of the models from the server and stored on the mobile computing device, wherein each model represents an activity; assign a confidence score to each model based on the comparison with the features, wherein the confidence score comprises a probability that model matches the features;

Instant claim 21: identifying a first activity based on the analysis; determining that further contextual data has been collected, at a second time, for identifying a further activity, wherein the first time and the second time are different;
'302 claim 2: A system according to claim 1, wherein the mobile computing device further extracts further features from additional contextual data received, compares the further features generated from a set of stored models, assigns a confidence score to each model based on the comparison with the features, and assigns the activity associated with the model having the highest confidence score to the activity being performed by the user.

Instant claim 21: transmitting the further contextual data and a corresponding label from the mobile computing device to a server; analyzing on the server the further contextual data and the corresponding label using a model stored on the server; and
'302 claim 1: receive from a user of the mobile computing device or sensor device an identifier for the features only when the confidence scores for a match of the features with each of the models are low; transmit the identifier and features to the server only when the confidence scores for a match of the features with each of the models are low; and the server to: receive from the mobile computing device or sensor device, the features and the identifier on the server only when the confidence scores for each model are low, train a new model on the server using the received features and the identifier; and

Instant claim 21: providing information for display on the mobile computing device based on the analysis of the further contextual data and the corresponding label.
'302 claim 1: send the new model to the mobile computing device or the sensor device, wherein providing the features and the identifier from the mobile computing device or sensor device to the server only when the confidence scores are low offsets processing expense of the server by performing activity detection on the mobile computing device or sensor device and training of new activity models on the server.

Claims 1 and 2 of U.S. Patent No. 11,477,302 recite “process contextual data” and “further extracts further features from additional contextual data received.” Claims 1 and 2 of U.S. Patent No. 11,477,302 do not specifically disclose collecting, at a first time, contextual data; and determining that further contextual data has been collected, at a second time, wherein the first time and the second time are different. However, Ellis teaches collecting, at a first time, contextual data; and determining that further contextual data has been collected, at a second time, wherein the first time and the second time are different (see e.g., FIG. 8 for after an alert relating to the first instance of audio signals/ambient sound is provided to the user, a second instance of audio signals/ambient sound is received, from which audio features are extracted and [0032] for at 110, an audio signal being received by the application running on a mobile device, in some embodiments, the audio signal being received from a microphone of the mobile device, for example, the audio signal being received from a built-in microphone of a mobile phone or smartphone capturing ambient sound, as another example, the audio signal being received from a built-in microphone of a tablet computer, and as yet another example, the audio signal being received from a microphone of a special purpose device built for the purpose of recognizing non-speech audio events. A first instance of audio signals/ambient sound for a user is collected at a first time from a microphone of a mobile device. Audio features extracted from a second instance of audio signals/ambient sound are collected at a second time, different from the first time.). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the mobile computing device of claims 1 and 2 of U.S. Patent No. 11,477,302 to collect, at a first time, contextual data; and determine that further contextual data has been collected, at a second time, wherein the first time and the second time are different, as taught by Ellis, for the benefit of providing the user with audio event recognition in real-time (see e.g., Ellis, [0092]).

Claim 1 of U.S. Patent No. 11,477,302 recites “send the new model to the mobile computing device or the sensor device.” Claim 1 of U.S. Patent No. 11,477,302 does not specifically disclose providing information for display on the mobile computing device. However, Ellis teaches providing information for display on the mobile computing device (see e.g., [0025] for the user being alerted with a visual alert on the screen of the mobile device to inform the user that a door knock sound has been detected. The server provides an alert for display on the mobile computing device.). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the mobile computing device of claim 1 of U.S. Patent No.
11,477,302 to provide information for display on the mobile computing device, as taught by Ellis, for the benefit of providing one or more alerts to deaf or hearing impaired individuals of audio events (see e.g., Ellis, [0005]).

Claim 9 of U.S. Patent No. 11,477,302 recites “training the new model on the server using a combination of labeled population data indexed by activity label and the user's specific contextual data.” The claims of U.S. Patent No. 11,477,302 do not specifically disclose transmitting a corresponding label from the mobile computing device to a server; analyzing on the server the corresponding label using a model stored on the server; and providing information for display on the mobile computing device based on the analysis of the corresponding label. However, Ellis teaches transmitting a corresponding label from the mobile computing device to a server (see e.g., [0084] for the application executing on mobile device 510 extracting audio features from the audio signal and comparing the audio features to the classification models at 650 in accordance with 120 and 130 of process 100, determining if there is a match in accordance with 140 of process 100, and generating and outputting alerts in accordance with 150, 160, and 170 of process 100 and/or process 200 and upon generating an alert in response to a match between the audio features and one or more classification models, the alert and/or labeled audio features corresponding to the alert being transmitted to server 502); analyzing on the server the corresponding label using a model stored on the server (see e.g., [0084] for server 502 using the labeled audio features to update and/or improve the one or more classification models and for example, the labeled audio features being used to train one or more classification models); and providing information [alert] for display on the mobile computing device based on the analysis of the corresponding label (see e.g., [0045] for an alert including a visual alert, which can take the form of, for example, a flashing display, a blinking light (e.g., a mobile phone equipped with a camera flash can cause the flash to activate), an animation, any other suitable visual alert, or any suitable combination thereof and [0084] for the application executing on mobile device 510 extracting audio features from the audio signal and comparing the audio features to the classification models at 650 in accordance with 120 and 130 of process 100, determining if there is a match in accordance with 140 of process 100, and generating and outputting alerts in accordance with 150, 160, and 170 of process 100 and/or process 200, the labeled audio features being used to train one or more classification models, and these updated classification models being transmitted to the application executing on mobile device 510. The server provides an alert for display on the mobile device based on a server model updated via label analysis.). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the mobile computing device of claim 1 of U.S. Patent No.
11,477,302 to transmit a corresponding label from the mobile computing device to a server; analyze on the server the corresponding label using a model stored on the server; and provide information for display on the mobile computing device based on the analysis of the corresponding label, as taught by Ellis, for the benefit of providing alerts based on updated models (see e.g., Ellis, [0084]).

As to claim 22, claims 1 and 2 of U.S. Patent No. 11,477,302 recite “contextual data.” Claims 1 and 2 of U.S. Patent No. 11,477,302 do not specifically disclose wherein the contextual data comprises a picture, a video, or a sound recording. However, Ellis teaches wherein the contextual data comprises a picture, a video, or a sound recording (see e.g., [0032] for at 110, an audio signal being received by the application running on a mobile device, in some embodiments, the audio signal being received from a microphone of the mobile device, for example, the audio signal being received from a built-in microphone of a mobile phone or smartphone capturing ambient sound, as another example, the audio signal being received from a built-in microphone of a tablet computer, and as yet another example, the audio signal being received from a microphone of a special purpose device built for the purpose of recognizing non-speech audio events. The contextual data comprises sound recordings.). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the mobile computing device of claims 1 and 2 of U.S. Patent No. 11,477,302 wherein the contextual data comprises a picture, a video, or a sound recording, as taught by Ellis, for the benefit of providing one or more alerts to deaf or hearing impaired individuals of audio events (see e.g., Ellis, [0005]).

As to claim 23, claim 2 of U.S. Patent No. 11,477,302 recites “additional contextual data.” Claim 2 of U.S. Patent No. 11,477,302 does not specifically recite wherein the further contextual data comprises a picture, a video, or a sound recording. However, Ellis teaches wherein the further contextual data comprises a picture, a video, or a sound recording (see e.g., [0032] for at 110, an audio signal being received by the application running on a mobile device, in some embodiments, the audio signal being received from a microphone of the mobile device, for example, the audio signal being received from a built-in microphone of a mobile phone or smartphone capturing ambient sound, as another example, the audio signal being received from a built-in microphone of a tablet computer, and as yet another example, the audio signal being received from a microphone of a special purpose device built for the purpose of recognizing non-speech audio events. The contextual data comprises sound recordings.). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the mobile computing device of claim 2 of U.S. Patent No. 11,477,302 wherein the further contextual data comprises a picture, a video, or a sound recording, as taught by Ellis, for the benefit of providing one or more alerts to deaf or hearing impaired individuals of audio events (see e.g., Ellis, [0005]).

As to claim 24, claims 1 and 2 of U.S. Patent No. 11,477,302 recite “the mobile computing device.” Claims 1 and 2 of U.S. Patent No. 11,477,302 do not specifically disclose wherein the mobile computing device is wearable on the user.
However, Ellis teaches wherein the mobile computing device is wearable on the user (see e.g., [0033] for the audio signal being received from a microphone carried by a user or coupled to the body of a user in any suitable manner, and connected to the mobile device by a wire and as another example, the audio signal being received from a microphone coupled to any suitable platform, such as a purse or a bag). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the mobile computing device of claims 1 and 2 of U.S. Patent No. 11,477,302 wherein the mobile computing device is wearable on the user, as taught by Ellis, for the benefit of providing one or more alerts to deaf or hearing impaired individuals of audio events (see e.g., Ellis, [0005]).

As to claim 28, claims 11 and 12 of U.S. Patent No. 11,477,302 recite “contextual data.” Claims 11 and 12 of U.S. Patent No. 11,477,302 do not specifically disclose wherein the contextual data comprises a picture, a video, or a sound recording. However, Ellis teaches wherein the contextual data comprises a picture, a video, or a sound recording (see e.g., [0032] for at 110, an audio signal being received by the application running on a mobile device, in some embodiments, the audio signal being received from a microphone of the mobile device, for example, the audio signal being received from a built-in microphone of a mobile phone or smartphone capturing ambient sound, as another example, the audio signal being received from a built-in microphone of a tablet computer, and as yet another example, the audio signal being received from a microphone of a special purpose device built for the purpose of recognizing non-speech audio events. The contextual data comprises sound recordings.). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the mobile computing device of claims 11 and 12 of U.S. Patent No. 11,477,302 wherein the contextual data comprises a picture, a video, or a sound recording, as taught by Ellis, for the benefit of providing one or more alerts to deaf or hearing impaired individuals of audio events (see e.g., Ellis, [0005]).

As to claim 29, claim 12 of U.S. Patent No. 11,477,302 recites “additional contextual data.” Claim 12 of U.S. Patent No. 11,477,302 does not specifically recite wherein the further contextual data comprises a picture, a video, or a sound recording. However, Ellis teaches wherein the further contextual data comprises a picture, a video, or a sound recording (see e.g., [0032] for at 110, an audio signal being received by the application running on a mobile device, in some embodiments, the audio signal being received from a microphone of the mobile device, for example, the audio signal being received from a built-in microphone of a mobile phone or smartphone capturing ambient sound, as another example, the audio signal being received from a built-in microphone of a tablet computer, and as yet another example, the audio signal being received from a microphone of a special purpose device built for the purpose of recognizing non-speech audio events. The contextual data comprises sound recordings.). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the mobile computing device of claim 12 of U.S. Patent No.
11,477,302 wherein the further contextual data comprises a picture, a video, or a sound recording, as taught by Ellis, for the benefit of providing one or more alerts to deaf or hearing impaired individuals of audio events (see e.g., Ellis, [0005]).

As to claim 30, claims 11 and 12 of U.S. Patent No. 11,477,302 recite “the mobile computing device.” Claims 11 and 12 of U.S. Patent No. 11,477,302 do not specifically disclose wherein the mobile computing device is wearable on the user. However, Ellis teaches wherein the mobile computing device is wearable on the user (see e.g., [0033] for the audio signal being received from a microphone carried by a user or coupled to the body of a user in any suitable manner, and connected to the mobile device by a wire and as another example, the audio signal being received from a microphone coupled to any suitable platform, such as a purse or a bag). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the mobile computing device of claims 11 and 12 of U.S. Patent No. 11,477,302 wherein the mobile computing device is wearable on the user, as taught by Ellis, for the benefit of providing one or more alerts to deaf or hearing impaired individuals of audio events (see e.g., Ellis, [0005]).

Claims 25, 26, 31, and 32 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 2, 11, and 12 of U.S. Patent No. 11,477,302 in view of Ellis et al. (US Publication No. 2013/0070928) as applied to claims 21-24 and 27-30, and further in view of Ye et al. (US Publication No. 2016/0042539).

As to claim 25, claims 1 and 2 of U.S. Patent No. 11,477,302 recite “the mobile computing device.” Claims 1 and 2 of U.S. Patent No. 11,477,302 in view of Ellis do not specifically disclose wherein the mobile computing device is wearable on a head of the user. However, Ye teaches wherein the mobile computing device is wearable on a head of the user (see e.g., [0035] for the embodiments of the invention being described mainly concerning a portable electronic device in the form of a mobile phone (also called “cell phone”), however, it should be understood that, the invention should not be limited to the circumstance of the mobile phone, but can relate to any types of appropriate electronic equipment, and examples of such electronic equipment including a smart watch, intelligent glasses, intelligent wig, a headset device, a wearable device, a fixed-line telephone, a media player, a game device, a PDA, a computer, a digital camera and the like and [0058] for a device with a sound detector, e.g. an intelligent phone, a smart watch, a headset device and the like being used for testing sound, for example, exclamations such as “wow” made by the user when watching a wonderful part. The mobile device for detecting sound can be a headset device.). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the mobile computing device of claims 1 and 2 of U.S. Patent No. 11,477,302 in view of Ellis wherein the mobile computing device is wearable on a head of the user, as taught by Ye, for the benefit of testing two or more physiological parameters of the user at the same time (see e.g., Ye, [0057]).

As to claim 26, claims 1 and 2 of U.S. Patent No. 11,477,302 recite “the mobile computing device.” Claims 1 and 2 of U.S. Patent No.
11,477,302 in view of Ellis do not specifically disclose wherein the mobile computing device is wearable on a wrist of the user. However, Ye teaches wherein the mobile computing device is wearable on a wrist of the user (see e.g., [0035] for the embodiments of the invention being described mainly concerning a portable electronic device in the form of a mobile phone (also called “cell phone”), however, it should be understood that, the invention should not be limited to the circumstance of the mobile phone, but can relate to any types of appropriate electronic equipment, and examples of such electronic equipment including a smart watch, intelligent glasses, intelligent wig, a headset device, a wearable device, a fixed-line telephone, a media player, a game device, a PDA, a computer, a digital camera and the like and [0058] for a device with a sound detector, e.g. an intelligent phone, a smart watch, a headset device and the like being used for testing sound, for example, exclamations such as “wow” made by the user when watching a wonderful part. The mobile device for detecting sound can be a smart watch.). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the mobile computing device of claims 1 and 2 of U.S. Patent No. 11,477,302 in view of Ellis wherein the mobile computing device is wearable on a wrist of the user, as taught by Ye, for the benefit of testing two or more physiological parameters of the user at the same time (see e.g., Ye, [0057]).

As to claim 31, claims 11 and 12 of U.S. Patent No. 11,477,302 recite “the mobile computing device.” Claims 11 and 12 of U.S. Patent No. 11,477,302 in view of Ellis do not specifically disclose wherein the mobile computing device is wearable on a head of the user. However, Ye teaches wherein the mobile computing device is wearable on a head of the user (see e.g., [0035] for the embodiments of the invention being described mainly concerning a portable electronic device in the form of a mobile phone (also called “cell phone”), however, it should be understood that, the invention should not be limited to the circumstance of the mobile phone, but can relate to any types of appropriate electronic equipment, and examples of such electronic equipment including a smart watch, intelligent glasses, intelligent wig, a headset device, a wearable device, a fixed-line telephone, a media player, a game device, a PDA, a computer, a digital camera and the like and [0058] for a device with a sound detector, e.g. an intelligent phone, a smart watch, a headset device and the like being used for testing sound, for example, exclamations such as “wow” made by the user when watching a wonderful part. The mobile device for detecting sound can be a headset device.). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the mobile computing device of claims 11 and 12 of U.S. Patent No. 11,477,302 in view of Ellis wherein the mobile computing device is wearable on a head of the user, as taught by Ye, for the benefit of testing two or more physiological parameters of the user at the same time (see e.g., Ye, [0057]).

As to claim 32, claims 11 and 12 of U.S. Patent No. 11,477,302 recite “the mobile computing device.” Claims 11 and 12 of U.S. Patent No. 11,477,302 in view of Ellis do not specifically disclose wherein the mobile computing device is wearable on a wrist of the user.
However, Ye teaches wherein the mobile computing device is wearable on a wrist of the user (see e.g., [0035] for the embodiments of the invention being described mainly concerning a portable electronic device in the form of a mobile phone (also called “cell phone”), however, it should be understood that, the invention should not be limited to the circumstance of the mobile phone, but can relate to any types of appropriate electronic equipment, and examples of such electronic equipment including a smart watch, intelligent glasses, intelligent wig, a headset device, a wearable device, a fixed-line telephone, a media player, a game device, a PDA, a computer, a digital camera and the like and [0058] for a device with a sound detector, e.g. an intelligent phone, a smart watch, a headset device and the like being used for testing sound, for example, exclamations such as “wow” made by the user when watching a wonderful part. The mobile device for detecting sound can be a smart watch.). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the mobile computing device of claims 11 and 12 of U.S. Patent No. 11,477,302 in view of Ellis wherein the mobile computing device is wearable on a wrist of the user, as taught by Ye, for the benefit of testing two or more physiological parameters of the user at the same time (see e.g., Ye, [0057]).

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 21, 22, 24, 27, 28, and 30 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ellis et al. (US Publication No. 2013/0070928).
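As orientation for the element-by-element mapping that follows, here is a minimal Python sketch of the device-server loop recited by independent claim 21: on-device analysis against a stored model, upload of further contextual data with a corresponding label, server-side analysis with the server's own model, and information returned for display. All names and interfaces below are illustrative assumptions, not drawn from the claims or from Ellis.

# Hypothetical sketch of the claim 21 flow. All names and interfaces here
# are illustrative assumptions; none come from the claims or the cited art.

class Model:
    """Stand-in for a trained activity classifier."""
    def classify(self, data: bytes) -> str:
        # Placeholder inference; a real model would extract features from
        # the contextual data and score them against learned activities.
        return "walking"

class Server:
    """Stand-in for the server holding its own stored model."""
    def __init__(self, model: Model):
        self.model = model

    def analyze(self, further_data: bytes, label: str) -> str:
        # Analyze the further contextual data and its label using the
        # server-stored model, returning information for display.
        predicted = self.model.classify(further_data)
        return f"server predicted '{predicted}' for label '{label}'"

def run_claimed_loop(read_sensor, device_model: Model, server: Server) -> None:
    first_data = read_sensor()                          # collected at a first time
    first_activity = device_model.classify(first_data)  # on-device analysis
    further_data = read_sensor()                        # collected at a second time
    # Transmit the further contextual data and a corresponding label,
    # then display information based on the server-side analysis.
    info = server.analyze(further_data, label=first_activity)
    print(f"first activity: {first_activity}; {info}")

run_claimed_loop(lambda: b"\x00" * 8, Model(), Server(Model()))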
As to claim 21, Ellis teaches a computer-implemented method for distributed activity detection, comprising:

collecting, at a first time, contextual data [first instance of audio signals/ambient sound] for a user from a sensor [microphone] of a mobile computing device (see e.g., [0032] for at 110, an audio signal being received by the application running on a mobile device, in some embodiments, the audio signal being received from a microphone of the mobile device, for example, the audio signal being received from a built-in microphone of a mobile phone or smartphone capturing ambient sound, as another example, the audio signal being received from a built-in microphone of a tablet computer, and as yet another example, the audio signal being received from a microphone of a special purpose device built for the purpose of recognizing non-speech audio events and [0084] for mobile device 510 receiving the application and classification models from server 502 at 630 and after the application is received at mobile device 510, the application being installed and beginning capturing audio signals at 640 in accordance with 110 of process 100 described herein. A first instance of audio signals/ambient sound for a user is collected at a first time from a microphone of a mobile device.);

analyzing on the mobile computing device the contextual data using a model stored on the mobile computing device (see e.g., [0084] for the application executing on mobile device 510 extracting audio features from the audio signal and comparing the audio features to the classification models at 650 in accordance with 120 and 130 of process 100 and determining if there is a match in accordance with 140 of process 100. The audio signals/ambient sound are analyzed on the mobile device using a model stored on the mobile device.);

identifying a first activity [event] based on the analysis (see e.g., [0042] for at 150, the application identifying one or more non-speech audio events based on the comparison performed at 130 and the determination performed at 140, [0045] for at 160, the application generating an alert based on the identified non-speech audio events, and [0084] for generating and outputting alerts in accordance with 150, 160, and 170 of process 100 and/or process 200. A first event is identified based on the analysis.);

determining that further contextual data [audio features extracted from a second instance of audio signals/ambient sound] has been collected, at a second time, for identifying a further activity [event], wherein the first time and the second time are different (see e.g., FIG.
6 for after an alert relating to the first instance of audio signals/ambient sound is provided to the user and used to update classification models, a second instance of audio signals/ambient sound is received, from which audio features are extracted, [0032] for at 110, an audio signal being received by the application running on a mobile device, in some embodiments, the audio signal being received from a microphone of the mobile device, for example, the audio signal being received from a built-in microphone of a mobile phone or smartphone capturing ambient sound, as another example, the audio signal being received from a built-in microphone of a tablet computer, and as yet another example, the audio signal being received from a microphone of a special purpose device built for the purpose of recognizing non-speech audio events, and [0084] for mobile device 510 receiving the application and classification models from server 502 at 630, after the application is received at mobile device 510, the application being installed and beginning capturing audio signals at 640 in accordance with 110 of process 100 described herein, the application executing on mobile device 510 extracting audio features from the audio signal and comparing the audio features to the classification models at 650 in accordance with 120 and 130 of process 100 and determining if there is a match in accordance with 140 of process 100. Audio features extracted from a second instance of audio signals/ambient sound are collected at a second time, different from the first time, for identifying a further event.);

transmitting the further contextual data and a corresponding label from the mobile computing device to a server (see e.g., [0084] for upon generating an alert in response to a match between the audio features and one or more classification models, the alert and/or labeled audio features corresponding to the alert being transmitted to server 502. The audio features extracted from a second instance of audio signals/ambient sound and a corresponding label are transmitted from the mobile device to a server.);

analyzing on the server the further contextual data and the corresponding label using a model stored on the server (see e.g., [0084] for server 502 using the labeled audio features to update and/or improve the one or more classification models and for example, the labeled audio features being used to train one or more classification models.
The server analyzes the audio features and corresponding label extracted from a second instance of audio signals/ambient sound using a model stored on the server.); and

providing information [alert] for display on the mobile computing device based on the analysis of the further contextual data and the corresponding label (see e.g., [0045] for an alert including a visual alert, which can take the form of, for example, a flashing display, a blinking light (e.g., a mobile phone equipped with a camera flash can cause the flash to activate), an animation, any other suitable visual alert, or any suitable combination thereof and [0084] for the application executing on mobile device 510 extracting audio features from the audio signal and comparing the audio features to the classification models at 650 in accordance with 120 and 130 of process 100, determining if there is a match in accordance with 140 of process 100, and generating and outputting alerts in accordance with 150, 160, and 170 of process 100 and/or process 200, the labeled audio features being used to train one or more classification models, and these updated classification models being transmitted to the application executing on mobile device 510. The server provides an alert for display on the mobile device based on a server model updated via labeled audio feature analysis.).

As to claim 22, Ellis teaches the method of claim 21, wherein the contextual data comprises a picture, a video, or a sound recording (see e.g., [0032] for at 110, an audio signal being received by the application running on a mobile device, in some embodiments, the audio signal being received from a microphone of the mobile device, for example, the audio signal being received from a built-in microphone of a mobile phone or smartphone capturing ambient sound, as another example, the audio signal being received from a built-in microphone of a tablet computer, and as yet another example, the audio signal being received from a microphone of a special purpose device built for the purpose of recognizing non-speech audio events. The contextual data comprises sound recordings.).

As to claim 24, Ellis teaches the method of claim 21, wherein the mobile computing device is wearable on the user (see e.g., [0033] for the audio signal being received from a microphone carried by a user or coupled to the body of a user in any suitable manner, and connected to the mobile device by a wire and as another example, the audio signal being received from a microphone coupled to any suitable platform, such as a purse or a bag).

As to claim 27, Ellis teaches a computer-implemented system for distributed activity detection, comprising: one or more processors (see e.g., [0079] for mobile device 510 including a processor 512); and a non-transitory computer-readable medium coupled to the one or more processors having instructions stored thereon that, when executed by the one or more processors, cause the one or more processors to perform a method for generating a recommendation (see e.g., [0079] for memory 518 including a storage device (such as a computer-readable medium) for storing a computer program for controlling processor 512. Examiner notes that “a method for generating a recommendation” is an intended use and is not given patentable weight.
This portion of the preamble does not limit the structure of the claimed invention, is not necessary to give meaning to the remainder of the claim, and would have no effect on the body of the claim if removed.), the method comprising:

collecting, at a first time, contextual data [first instance of audio signals/ambient sound] for a user from a sensor [microphone] of a mobile computing device (see e.g., [0032] for at 110, an audio signal being received by the application running on a mobile device, in some embodiments, the audio signal being received from a microphone of the mobile device, for example, the audio signal being received from a built-in microphone of a mobile phone or smartphone capturing ambient sound, as another example, the audio signal being received from a built-in microphone of a tablet computer, and as yet another example, the audio signal being received from a microphone of a special purpose device built for the purpose of recognizing non-speech audio events and [0084] for mobile device 510 receiving the application and classification models from server 502 at 630 and after the application is received at mobile device 510, the application being installed and beginning capturing audio signals at 640 in accordance with 110 of process 100 described herein. A first instance of audio signals/ambient sound for a user is collected at a first time from a microphone of a mobile device.);

analyzing on the mobile computing device the contextual data using a model stored on the mobile computing device (see e.g., [0084] for the application executing on mobile device 510 extracting audio features from the audio signal and comparing the audio features to the classification models at 650 in accordance with 120 and 130 of process 100 and determining if there is a match in accordance with 140 of process 100. The audio signals/ambient sound are analyzed on the mobile device using a model stored on the mobile device.);

identifying a first activity [event] based on the analysis (see e.g., [0042] for at 150, the application identifying one or more non-speech audio events based on the comparison performed at 130 and the determination performed at 140, [0045] for at 160, the application generating an alert based on the identified non-speech audio events, and [0084] for generating and outputting alerts in accordance with 150, 160, and 170 of process 100 and/or process 200. A first event is identified based on the analysis.);

determining that further contextual data [audio features extracted from a second instance of audio signals/ambient sound], at a second time, has been collected for identifying a further activity [event], wherein the first time and the second time are different (see e.g., FIG.
6 for after an alert relating to the first instance of audio signals/ambient sound is provided to the user and used to update classification models, a second instance of audio signals/ambient sound is received, from which audio features are extracted, [0032] for at 110, an audio signal being received by the application running on a mobile device, in some embodiments, the audio signal being received from a microphone of the mobile device, for example, the audio signal being received from a built-in microphone of a mobile phone or smartphone capturing ambient sound, as another example, the audio signal being received from a built-in microphone of a tablet computer, and as yet another example, the audio signal being received from a microphone of a special purpose device built for the purpose of recognizing non-speech audio events, and [0084] for mobile device 510 receiving the application and classification models from server 502 at 630, after the application is received at mobile device 510, the application being installed and beginning capturing audio signals at 640 in accordance with 110 of process 100 described herein, the application executing on mobile device 510 extracting audio features from the audio signal and comparing the audio features to the classification models at 650 in accordance with 120 and 130 of process 100 and determining if there is a match in accordance with 140 of process 100. Audio features extracted from a second instance of audio signals/ambient sound are collected at a second time, different from the first time, for identifying a further event.);

transmitting the further contextual data and a corresponding label from the mobile computing device to a server (see e.g., [0084] for upon generating an alert in response to a match between the audio features and one or more classification models, the alert and/or labeled audio features corresponding to the alert being transmitted to server 502. The audio features extracted from a second instance of audio signals/ambient sound and a corresponding label are transmitted from the mobile device to a server.);

analyzing on the server the further contextual data and the corresponding label using a model stored on the server (see e.g., [0084] for server 502 using the labeled audio features to update and/or improve the one or more classification models and for example, the labeled audio features being used to train one or more classification models.
The server analyzes the audio features and corresponding label extracted from a second instance of audio signals/ambient sound using a model stored on the server.); and

providing information [alert] for display on the mobile computing device based on the analysis of the further contextual data and the corresponding label (see e.g., [0045] for an alert including a visual alert, which can take the form of, for example, a flashing display, a blinking light (e.g., a mobile phone equipped with a camera flash can cause the flash to activate), an animation, any other suitable visual alert, or any suitable combination thereof and [0084] for the application executing on mobile device 510 extracting audio features from the audio signal and comparing the audio features to the classification models at 650 in accordance with 120 and 130 of process 100, determining if there is a match in accordance with 140 of process 100, and generating and outputting alerts in accordance with 150, 160, and 170 of process 100 and/or process 200, the labeled audio features being used to train one or more classification models, and these updated classification models being transmitted to the application executing on mobile device 510. The server provides an alert for display on the mobile device based on a server model updated via labeled audio feature analysis.).

As to claim 28, Ellis teaches the system of claim 27, wherein the contextual data comprises a picture, a video, or a sound recording (see e.g., [0032] for at 110, an audio signal being received by the application running on a mobile device, in some embodiments, the audio signal being received from a microphone of the mobile device, for example, the audio signal being received from a built-in microphone of a mobile phone or smartphone capturing ambient sound, as another example, the audio signal being received from a built-in microphone of a tablet computer, and as yet another example, the audio signal being received from a microphone of a special purpose device built for the purpose of recognizing non-speech audio events. The contextual data comprises sound recordings.).

As to claim 30, Ellis teaches the system of claim 27, wherein the mobile computing device is wearable on the user (see e.g., [0033] for the audio signal being received from a microphone carried by a user or coupled to the body of a user in any suitable manner, and connected to the mobile device by a wire and as another example, the audio signal being received from a microphone coupled to any suitable platform, such as a purse or a bag).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3.
Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 23 and 29 are rejected under 35 U.S.C. 103 as being unpatentable over the embodiment depicted in FIG. 8 of Ellis et al. (US Publication No. 2013/0070928) as applied to claims 21, 22, 24, 27, 28, and 30 above, and further in view of the embodiment depicted in FIG. 7 of Ellis et al. (US Publication No. 2013/0070928).

As to claim 23, the limitations of parent claim 21 have been discussed above. The embodiment depicted in FIG. 8 of Ellis does not specifically disclose wherein the further contextual data comprises a picture, a video, or a sound recording. However, the embodiment depicted in FIG. 7 of Ellis teaches wherein the further contextual data comprises a picture, a video, or a sound recording (see e.g., [0085] for mobile device 510 receiving the application at 720, and starting to receive audio and transmit it to the server 502 at 730 and in some embodiments, audio being transmitted to the server in response to some property of the received audio being over a threshold, as described in relation to 330 in FIG. 3. The contextual data comprises sound recordings.). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the application of the embodiment depicted in FIG. 8 of Ellis wherein the further contextual data comprises a picture, a video, or a sound recording, as taught by the embodiment depicted in FIG. 7 of Ellis, for the benefit of only analyzing audio over a quality threshold (see e.g., Ellis, [0062]).

As to claim 29, the limitations of parent claim 27 have been discussed above. The embodiment depicted in FIG. 8 of Ellis does not specifically disclose wherein the further contextual data comprises a picture, a video, or a sound recording. However, the embodiment depicted in FIG. 7 of Ellis teaches wherein the further contextual data comprises a picture, a video, or a sound recording (see e.g., [0085] for mobile device 510 receiving the application at 720, and starting to receive audio and transmit it to the server 502 at 730 and in some embodiments, audio being transmitted to the server in response to some property of the received audio being over a threshold, as described in relation to 330 in FIG. 3. The contextual data comprises sound recordings.). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the application of the embodiment depicted in FIG. 8 of Ellis wherein the further contextual data comprises a picture, a video, or a sound recording, as taught by the embodiment depicted in FIG. 7 of Ellis, for the benefit of only analyzing audio over a quality threshold (see e.g., Ellis, [0062]).
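The FIG. 7 embodiment relied on above transmits audio to the server only when some property of the received audio exceeds a threshold (Ellis, [0085], referring back to 330 of FIG. 3). A minimal Python sketch of that gating follows; using RMS energy as the gating property is purely an illustrative assumption, since Ellis leaves the property unspecified.

# Hypothetical sketch of Ellis-style threshold gating ([0085]): audio is
# forwarded to the server only when some property of it exceeds a threshold.
# RMS energy is chosen here as the property purely for illustration.
import math

def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def maybe_transmit(samples, send_to_server, threshold=0.1):
    """Forward the audio to the server only if its RMS energy clears the
    threshold; otherwise drop it on-device without any upload."""
    if rms(samples) > threshold:
        send_to_server(samples)
        return True
    return False

# Example: quiet audio is dropped, louder audio is uploaded.
quiet = [0.01, -0.02, 0.015, -0.01]
loud = [0.4, -0.5, 0.45, -0.38]
maybe_transmit(quiet, print)   # returns False (not sent)
maybe_transmit(loud, print)    # returns True  (sent)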
Claims 25, 26, 31, and 32 are rejected under 35 U.S.C. 103 as being unpatentable over Ellis et al. (US Publication No. 2013/0070928) as applied to claims 21, 22, 24, 27, 28, and 30 above, and further in view of Ye et al. (US Publication No. 2016/0042539).

As to claim 25, the limitations of parent claims 21 and 24 have been discussed above. Ellis does not specifically disclose wherein the mobile computing device is wearable on a head of the user. However, Ye teaches wherein the mobile computing device is wearable on a head of the user (see e.g., [0035] for the embodiments of the invention being described mainly concerning a portable electronic device in the form of a mobile phone (also called “cell phone”), however, it should be understood that, the invention should not be limited to the circumstance of the mobile phone, but can relate to any types of appropriate electronic equipment, and examples of such electronic equipment including a smart watch, intelligent glasses, intelligent wig, a headset device, a wearable device, a fixed-line telephone, a media player, a game device, a PDA, a computer, a digital camera and the like and [0058] for a device with a sound detector, e.g. an intelligent phone, a smart watch, a headset device and the like being used for testing sound, for example, exclamations such as “wow” made by the user when watching a wonderful part. The mobile device for detecting sound can be a headset device.). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the mobile device of Ellis wherein the mobile computing device is wearable on a head of the user, as taught by Ye, for the benefit of testing two or more physiological parameters of the user at the same time (see e.g., Ye, [0057]).

As to claim 26, the limitations of parent claims 21 and 24 have been discussed above. Ellis does not specifically disclose wherein the mobile computing device is wearable on a wrist of the user. However, Ye teaches wherein the mobile computing device is wearable on a wrist of the user (see e.g., [0035] for the embodiments of the invention being described mainly concerning a portable electronic device in the form of a mobile phone (also called “cell phone”), however, it should be understood that, the invention should not be limited to the circumstance of the mobile phone, but can relate to any types of appropriate electronic equipment, and examples of such electronic equipment including a smart watch, intelligent glasses, intelligent wig, a headset device, a wearable device, a fixed-line telephone, a media player, a game device, a PDA, a computer, a digital camera and the like and [0058] for a device with a sound detector, e.g. an intelligent phone, a smart watch, a headset device and the like being used for testing sound, for example, exclamations such as “wow” made by the user when watching a wonderful part. The mobile device for detecting sound can be a smart watch.). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the mobile device of Ellis wherein the mobile computing device is wearable on a wrist of the user, as taught by Ye, for the benefit of testing two or more physiological parameters of the user at the same time (see e.g., Ye, [0057]).

As to claim 31, the limitations of parent claims 27 and 30 have been discussed above.
Ellis does not specifically disclose wherein the mobile computing device is wearable on a head of the user. However, Ye teaches wherein the mobile computing device is wearable on a head of the user (see e.g., [0035] for the embodiments of the invention being described mainly concerning a portable electronic device in the form of a mobile phone (also called “cell phone”), however, it should be understood that, the invention should not be limited to the circumstance of the mobile phone, but can relate to any types of appropriate electronic equipment, and examples of such electronic equipment including a smart watch, intelligent glasses, intelligent wig, a headset device, a wearable device, a fixed-line telephone, a media player, a game device, a PDA, a computer, a digital camera and the like and [0058] for a device with a sound detector, e.g. an intelligent phone, a smart watch, a headset device and the like being used for testing sound, for example, exclamations such as “wow” made by the user when watching a wonderful part. The mobile device for detecting sound can be a headset device.). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the mobile device of Ellis wherein the mobile computing device is wearable on a head of the user, as taught by Ye, for the benefit of testing two or more physiological parameters of the user at the same time (see e.g., Ye, [0057]).

As to claim 32, the limitations of parent claims 27 and 30 have been discussed above. Ellis does not specifically disclose wherein the mobile computing device is wearable on a wrist of the user. However, Ye teaches wherein the mobile computing device is wearable on a wrist of the user (see e.g., [0035] for the embodiments of the invention being described mainly concerning a portable electronic device in the form of a mobile phone (also called “cell phone”), however, it should be understood that, the invention should not be limited to the circumstance of the mobile phone, but can relate to any types of appropriate electronic equipment, and examples of such electronic equipment including a smart watch, intelligent glasses, intelligent wig, a headset device, a wearable device, a fixed-line telephone, a media player, a game device, a PDA, a computer, a digital camera and the like and [0058] for a device with a sound detector, e.g. an intelligent phone, a smart watch, a headset device and the like being used for testing sound, for example, exclamations such as “wow” made by the user when watching a wonderful part. The mobile device for detecting sound can be a smart watch.). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the mobile device of Ellis wherein the mobile computing device is wearable on a wrist of the user, as taught by Ye, for the benefit of testing two or more physiological parameters of the user at the same time (see e.g., Ye, [0057]).

Response to Arguments

Applicant's arguments filed December 10, 2025 have been fully considered but they are not persuasive.

On page 7 of Applicant's Response, Applicant argues:

Ellis discloses, for example, that "if there is not a match at 820, mobile device 510 can proceed to 840 where the audio features extracted at 810 can be transmitted to server 502." See Ellis, paragraph [0089].
Response to Arguments

Applicant's arguments filed December 10, 2025 have been fully considered but they are not persuasive. On page 7 of Applicant's Response, Applicant argues that Ellis discloses, for example, that "if there is not a match at 820, mobile device 510 can proceed to 840 where the audio features extracted at 810 can be transmitted to server 502" (see Ellis, paragraph [0089]), but does not disclose transmitting the further contextual data and a corresponding label, nor analyzing both the further contextual data and the corresponding label using a model stored on the server, nor providing information based on an analysis of the further contextual data and the corresponding label.

Examiner respectfully disagrees. Ellis does in fact disclose transmitting the further contextual data and a corresponding label, analyzing both the further contextual data and the corresponding label using a model stored on the server, and providing information based on an analysis of the further contextual data and the corresponding label. "A reference may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art" (see MPEP § 2123(I)). Ellis recites: "[T]he application executing on mobile device 510 can extract audio features from the audio signal and compare the audio features to the classification models at 650 in accordance with 120 and 130 of process 100, determine if there is a match in accordance with 140 of process 100, and generate and output alerts in accordance with 150, 160, and 170 of process 100 and/or process 200. It should be noted that, upon generating an alert in response to a match between the audio features and one or more classification models, the alert and/or labeled audio features corresponding to the alert can be transmitted to server 502. In this embodiment, server 502 can use the labeled audio features to update and/or improve the one or more classification models. For example, the labeled audio features can be used to train one or more classification models. These updated classification models can be transmitted to the application executing on mobile device 510" (see [0084]).

As detailed above, Ellis therefore teaches "transmitting the further contextual data and a corresponding label from the mobile computing device to a server; analyzing on the server the further contextual data and the corresponding label using a model stored on the server; and providing information for display on the mobile computing device based on the analysis of the further contextual data and the corresponding label," as recited by independent claims 21 and 27.
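The Ellis passage just quoted describes an end-to-end cycle: on-device feature matching, alerting on a match, uploading the labeled features, server-side retraining, and pushing the updated model back down. Purely as a reading aid, here is a minimal Python sketch of such a loop; the nearest-centroid "model" and every name in it are illustrative stand-ins, not Ellis's implementation or the claimed method.

```python
# Hypothetical sketch of the client/server loop described in Ellis [0084].
# All names (CentroidModel, Server, device_loop) are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class CentroidModel:
    """Stand-in 'classification model': one centroid per activity label."""
    centroids: dict = field(default_factory=dict)  # label -> list[float]

    def match(self, features, threshold=1.0):
        """Return the closest label within threshold, else None."""
        best_label, best_dist = None, threshold
        for label, centroid in self.centroids.items():
            dist = sum((f - c) ** 2 for f, c in zip(features, centroid)) ** 0.5
            if dist < best_dist:
                best_label, best_dist = label, dist
        return best_label

class Server:
    """Server side: accumulate labeled features, retrain, push the update."""
    def __init__(self, model):
        self.model = model
        self.labeled_features = []  # accumulated training data

    def receive(self, features, label):
        """Accept labeled features, retrain that label's centroid, return model."""
        self.labeled_features.append((features, label))
        samples = [f for f, l in self.labeled_features if l == label]
        self.model.centroids[label] = [
            sum(vals) / len(vals) for vals in zip(*samples)
        ]
        return self.model  # updated model transmitted back to the device

def device_loop(features, model, server):
    """On-device step: match locally; on a match, alert and report upstream."""
    label = model.match(features)
    if label is not None:
        print(f"alert: detected '{label}'")
        return server.receive(features, label)  # labeled features to server
    return model

# Usage: seed a model, run one detection, receive the retrained model.
model = CentroidModel({"speech": [0.2, 0.8], "siren": [0.9, 0.1]})
server = Server(model)
model = device_loop([0.25, 0.75], model, server)
```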
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Tsuboi (US Publication No. 2017/0061024) discloses that "the client 2 transmits selection information (feedback information) indicating the selected one or more search context labels to the server 1" (see [0100]).

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DARA J GLASSER, whose telephone number is (571) 270-3666. The examiner can normally be reached Monday-Thursday, 10:00am-2:00pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Apu Mofiz, can be reached at (571) 272-4080. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

01-28-2026
/DARA J GLASSER/
Examiner, Art Unit 2161
/APU M MOFIZ/
Supervisory Patent Examiner, Art Unit 2161
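The reply windows recited in the Conclusion can be worked through for this action's January 28, 2026 mailing date. The sketch below assumes plain calendar-month arithmetic and ignores the weekend/holiday rollover of 37 CFR 1.7, so treat the dates as approximations, not docketing advice.

```python
# Worked example of the reply windows in the Conclusion above, applied to
# this action's Jan 28, 2026 mailing date. Assumption: simple calendar-month
# arithmetic; the 37 CFR 1.7 weekend/holiday rollover is deliberately ignored.
from datetime import date

def add_months(d: date, months: int) -> date:
    """Advance a date by whole calendar months, clamping the day if needed."""
    month_index = d.month - 1 + months
    year, month = d.year + month_index // 12, month_index % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    last_day = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, last_day))

mailed = date(2026, 1, 28)
print("2-month window for a first reply:", add_months(mailed, 2))     # 2026-03-28
print("3-month shortened statutory period:", add_months(mailed, 3))  # 2026-04-28
print("6-month statutory maximum:", add_months(mailed, 6))           # 2026-07-28
```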

Prosecution Timeline

Sep 24, 2021
Application Filed
Sep 24, 2021
Response after Non-Final Action
Mar 07, 2023
Non-Final Rejection — §102, §103, §DP
Jul 14, 2023
Response after Non-Final Action
Jun 24, 2024
Non-Final Rejection — §102, §103, §DP
Nov 04, 2024
Response Filed
Jan 08, 2025
Final Rejection — §102, §103, §DP
Jul 02, 2025
Request for Continued Examination
Jul 08, 2025
Response after Non-Final Action
Aug 15, 2025
Non-Final Rejection — §102, §103, §DP
Dec 10, 2025
Response Filed
Jan 28, 2026
Final Rejection — §102, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572554
SYSTEMS, METHODS, AND COMPUTER READABLE MEDIA FOR DATA AUGMENTATION
2y 5m to grant · Granted Mar 10, 2026
Patent 12468669
TECHNIQUES FOR BUILDING AND VALIDATING DATABASE SOFTWARE IN A SHARED MANAGEMENT ENVIRONMENT
2y 5m to grant · Granted Nov 11, 2025
Patent 12443588
METHODS AND SYSTEMS FOR TRANSACTIONAL SCHEMA CHANGES
2y 5m to grant · Granted Oct 14, 2025
Patent 12298993
METADATA SYNCHRONIZATION FOR CROSS SYSTEM DATA CURATION
2y 5m to grant · Granted May 13, 2025
Patent 12271425
CONDENSING HIERARCHIES IN A GOVERNANCE SYSTEM BASED ON USAGE
2y 5m to grant · Granted Apr 08, 2025
Study what changed to get past this examiner, based on the 5 most recent grants.

Prosecution Projections

6-7
Expected OA Rounds
58%
Grant Probability
99%
With Interview (+53.9%)
3y 7m
Median Time to Grant
High
PTA Risk
Based on 163 resolved cases by this examiner. Grant probability derived from career allow rate.
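How the projection figures fit together is not documented, but one plausible reconstruction is that the interview lift is the allow-rate gap between cases with and without an interview, with the 58% career rate being the blend of the two cohorts. The sketch below works that assumption through; all of it is inference from the displayed figures, not the tool's actual formula.

```python
# Hypothetical reconstruction of the projection arithmetic shown above.
# Assumption: 'interview lift' = allow rate with interview minus allow rate
# without, and the career rate is a cohort-weighted blend of the two.
with_interview = 0.99   # "With Interview" figure shown above
lift = 0.539            # "+53.9%" interview lift
career_rate = 0.58      # 95 granted / 163 resolved

without_interview = with_interview - lift
print(f"implied rate without interview: {without_interview:.1%}")  # 45.1%

# If career_rate = share * with_interview + (1 - share) * without_interview,
# then the implied share of resolved cases that had an interview is:
interview_share = (career_rate - without_interview) / lift
print(f"implied interview share: {interview_share:.0%}")           # ~24%
```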
