Prosecution Insights
Last updated: April 19, 2026
Application No. 18/519,206

METHOD AND SYSTEM FOR TAMPERING DETERMINATION

Non-Final OA (§101, §103)
Filed: Nov 27, 2023
Examiner: KONERU, SUJAY
Art Unit: 3624
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Motorola Solutions Inc.
OA Round: 3 (Non-Final)
Grant Probability: 58% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 2m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 58% (421 granted / 722 resolved; +6.3% vs TC avg)
Interview Lift: strong, +37.0% among resolved cases with interview
Typical Timeline: 3y 2m avg prosecution; 36 applications currently pending
Career History: 758 total applications across all art units

Statute-Specific Performance

§101: 37.9% (-2.1% vs TC avg)
§103: 50.7% (+10.7% vs TC avg)
§102: 2.0% (-38.0% vs TC avg)
§112: 7.4% (-32.6% vs TC avg)
Tech Center averages are estimates • Based on career data from 722 resolved cases

Office Action

§101 §103
DETAILED ACTION

This Non-Final Office Action is in response to Applicant's amendments, arguments, and request for continued examination filed on January 15, 2026. Applicant has canceled claims 9 and 19 and added claims 21-22. Claims 1-8, 10-18, and 20-22 are currently pending. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/15/26 has been entered.

Response to Amendments

The 35 U.S.C. 101 rejections of claims 1-9, 11-18, and 20 are maintained in light of applicant's arguments. The 35 U.S.C. 103 rejections of claims 1-9, 11-18, and 20 are maintained in light of applicant's arguments.

Response to Arguments

Applicant's remarks submitted on 1/15/26 have been considered but are not persuasive.

Applicant argues on p. 7 of the remarks that the 101 rejection is improper. Examiner disagrees. Applicant argues on p. 7 that the claims improve the technological usefulness of body worn cameras, that this constitutes a practical application, and draws comparisons to the GPS in example 4 of the subject matter eligibility examples. Applicant argues on p. 8 that the claims are not merely the use of a body camera to implement the abstract idea, and notes that detection of video tampering in a body camera differs from a fixed-location camera because tampering is, for instance, revealed by discontinuities in shadows and static objects. Examiner disagrees and notes that the claims fall squarely under video analysis. Where the video comes from, whether body worn cameras or fixed cameras, does not change the steps and logic of the video analysis. The body worn camera merely provides the video that is analyzed; because the claims do not actually improve the way the video is captured or the way the body camera operates or is constructed, the body camera is not being improved but is rather a tool for implementing the abstract idea itself. Therefore, the 101 rejections are maintained.

Applicant argues on p. 10 of the remarks that the 103 rejections are improper. Examiner disagrees. Applicant argues on p. 10-11 that the cited art does not show a body worn camera and that a smartphone camera cannot be considered a body worn camera because it does not include the proper housing. Examiner disagrees and notes that applicant's claims are directed to video analysis and not to how the body camera is constructed or functions. A mobile smartphone camera used by a person can therefore be considered to show a body worn camera, and this would be obvious to one of ordinary skill in the art, as both are mobile cameras recording video from the perspective of a person who can be moving around. It is also well known that a mobile phone can be secured to a person. Therefore, video captured by a body worn camera is obvious in light of Siminoff.

Applicant argues on p. 12 of the remarks that tampering with a body camera is different because, for instance, the cameras are frequently touched. Examiner notes that these arguments or limitations do not appear in the claims, which are simply about video analysis as opposed to how the video is captured. Applicant argues on p. 12 that examiner withdraw statements concerning para [0012]-[0015], [0019]-[0024], and [0053]-[0060]. Examiner notes such sections are used in the 101 rejection for evidentiary support in the well-understood, routine, and conventional context, have no bearing on the 103 rejection, and thus will not be corrected or withdrawn.

Applicant argues on p. 12-13 of the remarks that the cited art does not teach a plurality of tamper determination factors but rather just one. Examiner disagrees. Siminoff shows at col 35, lines 4-20: "In some embodiments, analysis of collected audio data can include detecting a target frequency or frequency range of the audio data, determining a duration associated with a target frequency or frequency range, and determining that the duration exceeds a predetermined threshold. In one embodiment, determining an occurrence of tampering includes waveform analysis of collected audio data and/or deriving frequency components of the audio data using a fast Fourier transform. In some embodiments, user input is received by the server regarding tampering associated with collected audio data. Training of the computer model as described herein may be, for example, based at least in part on this user input. In one example, user input is provided to distinguish normal user operation and/or maintenance from unauthorized tampering events." Because tampering can be associated with "determining an occurrence of tampering includes waveform analysis of collected audio data and/or deriving frequency components of the audio data using a fast Fourier transform", this can be considered multiple tamper determination factors under the broadest reasonable interpretation. Therefore, the 103 rejections are maintained.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-8 and 11-18 are rejected under 35 U.S.C. 103 as being unpatentable over Siminoff et al. (US 11,217,076 B1) (hereinafter Siminoff) in view of Lee et al. (US 2018/0069838 A1) (hereinafter Lee).
Claims 1 and 11: Siminoff discloses the following limitations of claim 1: A computer-implemented method (and corresponding system; Figs. 1 and 2 show equivalent computing functionality and structure) comprising:

analyzing video captured by a body worn camera (col 2, lines 63-66: "One form of suspicious activity is tampering by one or more persons with a security and/or monitoring device (e.g., a camera) that is intended to protect, for example, a home or business location.", where it would be obvious to one of ordinary skill in the art that such a camera could be a body worn camera, which is another type of security device; and col 5, line 65 to col 6, line 7: "The user's client device 300 may comprise, for example, a mobile telephone (which may also be referred to as a cellular telephone), such as a smartphone, a personal digital assistant (PDA), or another communication device. The user's client device 300 comprises a display (not shown) and related components capable of displaying streaming and/or recorded video images. The user's client device 300 may also comprise a speaker and related components capable of broadcasting streaming and/or recorded audio, and may also comprise a microphone", where a mobile smartphone can be considered a camera on a person's body; and col 49, lines 22-62, showing the recording device can be a smartphone. Examiner further notes that video footage analysis techniques would be the same regardless of whether the video is captured from a static camera or a body camera);

the analyzing including employing at least one processor to make an initial determination that, for a time period in between the times t and t + delta t, a video irregularity comprising a video gap or a video obscuration exists in the video (col 3, lines 40-50: "In various embodiments, the determination of an occurrence of tampering may also be based on, for example, processing of the video data (e.g., by analyzing one or more frames of the video data as described below). In one example, the video data is collected by a camera of an A/V recording and communication device. The video data can indicate, for example, occlusion of images captured by a camera (e.g., caused by a blocking or an obscuring of a lens of the camera) that is associated with the tampering of, for example, the camera by a perpetrator.", where analysis based on one or more frames shows video analysis between the first and last frame);

generating, using the at least one processor, a tampering score for the video irregularity, wherein the tampering score is generated based on inputting values of variables that correspond to a plurality of tamper determination factors into a formula, and the tamper determination factors include: one or more causes of the video irregularity (col 35, lines 4-20: "In some embodiments, analysis of collected audio data can include detecting a target frequency or frequency range of the audio data, determining a duration associated with a target frequency or frequency range, and determining that the duration exceeds a predetermined threshold. In one embodiment, determining an occurrence of tampering includes waveform analysis of collected audio data and/or deriving frequency components of the audio data using a fast Fourier transform. In some embodiments, user input is received by the server regarding tampering associated with collected audio data. Training of the computer model as described herein may be, for example, based at least in part on this user input. In one example, user input is provided to distinguish normal user operation and/or maintenance from unauthorized tampering events.");

and one or more of at least one first factor directly perceivable from the video and at least one second factor not directly perceivable from the video (col 34, line 63 to col 35, line 19, where determining that the lens is blocked or obscured is perceivable from the video, and the detection based on audio data of a target frequency or frequency range exceeding a threshold is not directly perceivable from the video);

and when the tampering score satisfies a threshold condition corresponding to deliberate tampering, storing a flag in non-volatile storage that indicates that deliberate action of a person caused the video irregularity (col 32, lines 39-53: "In various additional embodiments, additionally or alternatively to the above embodiments, one or more other actions may be performed after tampering is detected (e.g., based on a determination that audio and/or video data contains data indicative of tampering), as described below. For example, the methods 600 and/or 601 may further include, at block 626, using machine learning technology (e.g., a computer model such as a trained neural network) to detect whether a person identified in video data collected by a camera (e.g., a camera of A/V recording and communication device 200) is behaving suspiciously. This determination may be, for example, used to alert the homeowner or user. This machine learning technology may be used in addition to, or alternatively to, the histogram and/or edge analysis." and col 45, lines 30-47: "Other examples of conditions that may be considered and/or for which data may be collected in a server making a determination of an occurrence of tampering include the following: a position of a camera itself is changed, a person approaches within a predetermined distance of a camera and looks in the camera at nighttime, a person puts a finger on a camera, a person approaches and places an object on a camera, a camera is blinded with a bright light source such as a torch or laser, a person approaches a camera and places a big object near the camera where the time duration of this activity exceeds a predetermined value, the video data from a camera is almost entirely gray/white/black/noisy video (e.g., to an extent that is above a predetermined threshold), the video data is blurry, a portion of the video frame is not informative (e.g., a predetermined percentage of portion of the frame is covered or blinded), and/or a signal from a camera is completely lost." and col 14, lines 5-18: "The A/V recording and communication device 200 may also include configuration setting(s) 256. In some embodiments, the configuration setting(s) 256 represent the 'state' of the A/V recording and communication device 200. For example, the A/V recording and communication device 200 may be placed into an 'armed' mode when its owner is away from home. A configuration file, flag, or the like may be modified, which might affect some aspects of the A/V recording and communication device's 200 operation. For instance, an A/V recording and communication device 200 in 'armed' mode may produce a siren sound in response to a triggering event, which would not otherwise occur if the A/V recording and communication device 200 was not in the 'armed' mode." and col 3, line 57 to col 4, line 7, showing use of a database to store the video and tampering data and video data analysis algorithms, where a flag to represent data is obvious in databases).

Siminoff, however, does not specifically disclose the video including an earliest-in-time video frame and a last-in-time video frame that are captured at times t and t + delta t, respectively. In analogous art, Lee discloses the following limitations: the video including an earliest-in-time video frame and a last-in-time video frame that are captured at times t and t + delta t, respectively (see para [0067], where the scene bracketed by two timestamps that show when the scene starts and ends shows such video frame data being captured).

It would have been obvious to one of ordinary skill in the art at the time of the invention to combine the teachings of Lee with Siminoff because more specific data for the time frames enables more specific analysis to assist in forensic purposes (see Lee, para [0003]-[0005]). Moreover, it would have been obvious to one of ordinary skill in the art at the time of the invention to include the method for security for scene-based sensor networks as taught by Lee in the camera tampering detection based on audio and video of Siminoff, since the claimed invention is merely a combination of old elements, in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
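The score-then-threshold-then-flag sequence the examiner maps onto Siminoff can be sketched in a few lines. This is an illustration only, not the claimed method: the claim leaves the formula unspecified, so a weighted linear combination is assumed here, and every factor name, weight, and the threshold value are hypothetical.

```python
# Hedged sketch of the claimed scoring step. The claim recites a "formula"
# over a plurality of tamper determination factors without specifying it;
# a weighted linear combination is assumed, with hypothetical names/values.

def tampering_score(factors: dict, weights: dict) -> float:
    """Combine tamper-determination-factor values into a single score."""
    return sum(weights[name] * factors[name] for name in weights)

# Factor values for a video irregularity observed between times
# t and t + delta_t, each normalized to [0, 1] (hypothetical numbers):
factors = {
    "gap_fraction": 0.8,       # share of the period with no footage
    "obscured_fraction": 0.6,  # first factor: directly perceivable from video
    "audio_anomaly": 0.9,      # second factor: e.g. FFT-based audio analysis
}
weights = {"gap_fraction": 0.4, "obscured_fraction": 0.3, "audio_anomaly": 0.3}

score = tampering_score(factors, weights)  # 0.77 with these numbers
flag_deliberate = score >= 0.5             # threshold condition satisfied;
# in the claim, this flag would be stored in non-volatile storage
```

With these assumed weights, any single factor alone could not cross the threshold, which is one way a "plurality of factors" differs operationally from the single-factor reading the applicant attributes to Siminoff.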
Claims 2 and 12: Siminoff further discloses the following limitations: wherein the tamper determination factors include at least two or more of the following: whether no video footage was being captured during the time period; whether no video footage being captured during the time period was due to the body worn camera being turned off; an extent to which video footage being captured during the time period was fully or partly obscured; an amount of motion of the video footage being captured during the time period; content alignment of the video footage being captured during the time period to another media source; and an extent to which the time period is matching in time to a partly or fully missed material event (col 45, lines 31-47: "Other examples of conditions that may be considered and/or for which data may be collected in a server making a determination of an occurrence of tampering include the following: a position of a camera itself is changed, a person approaches within a predetermined distance of a camera and looks in the camera at nighttime, a person puts a finger on a camera, a person approaches and places an object on a camera, a camera is blinded with a bright light source such as a torch or laser, a person approaches a camera and places a big object near the camera where the time duration of this activity exceeds a predetermined value, the video data from a camera is almost entirely gray/white/black/noisy video (e.g., to an extent that is above a predetermined threshold), the video data is blurry, a portion of the video frame is not informative (e.g., a predetermined percentage of portion of the frame is covered or blinded), and/or a signal from a camera is completely lost.").

Claims 3-4 and 13-14: Siminoff does not specifically disclose providing controlled access to the video stored as video footage via a digital record management system. In analogous art, Lee discloses the following limitations:

providing controlled access to the video stored as video footage via a digital record management system (see para [0083]: "When data is secured, this supports the definition of privileges as to which entities can perform what activities with which data. Security can be used to limit who can access data, when to access data, who can further distribute the data, to whom the data can be distributed, who can perform analysis of the data, and what types of analysis may be performed, for example. As shown above, security and privileges can be set differently for different data and for different fields within data. They can also be set differently for different entities, applications and services." and see para [0093]: "The privacy management system 800 includes a sensor map 802, a user list 804, a credentials engine 806 and a privileges manager 808. The sensor map 802 maintains information about the available sensor devices. The user list 804 maintains information about the users serviced by the privacy management system. The credentials engine 806 authenticates users as they access the system. The privileges manager 808 determines which users have which privileges with respect to which data.");

generating an analytics report that identifies deliberate tampering instances in relation to the body worn camera, the generating including extracting the stored flag via the digital record management system (see para [0036]: "For certain applications, such as when the automatic processing of video streams may lead to actions being taken (for example raising an alert if an unauthorized person has entered an area, an unauthorized object is detected, etc.), the reliability and integrity of the video stream from the camera to AI processing in the cloud is important. The encryption and authentication of the video and other sensor data becomes an important mechanism to ensure that the video stream has not been tampered with. To enable an entity that is processing the video to detect that the video has been tampered with, time stamps or counters can be inserted into the stream, typically as part of the video encoding process. The detection of missing time stamps or counters enables the receiving party to detect that the video has been tampered with. The time stamps or counters may be protected from tampering by either being part of the encrypted video payload and/or being included in a hash function that is contained in the encrypted payload or is carried separately and is included in a signature mechanism that enables the receiving party to verify that the hash result is obtained from a valid source. By checking that the counters or time stamps are present in the decrypted stream, the receiver can verify that parts of the video sequence have not been removed or replaced." and see para [0040]: "An additional advantage from a security perspective is that the user can determine how much data or images may be made available to a third party. For example SceneData may show people within the view of the camera interacting and the audio may capture what is being said between the parties. The AI systems may extract the identities of the two persons in the camera view. With the concept of SceneData, the user may allow the identities of the two persons to be accessed but may deny access to the actual video and audio content. SceneData and appropriate security can allow other systems to have intermediate access or access due to the result of a specific event. The user may also configure the system to enable access to be granted to SceneData in the event of a specific event or detected feature within the video. For example, in case of a specific face being detected, a notification may be sent to a third party (for example the police) and access may be granted to the video feed. In such case, a field may be added to scene data indicating that it was accessed by a third party, including the conditions or reasons as to why it was accessed. This record of access may also be stored in some other log file, which may or may not include a signature.").

It would have been obvious to one of ordinary skill in the art at the time of the invention to include the method for security for scene-based sensor networks as taught by Lee in the camera tampering detection based on audio and video of Siminoff, since the claimed invention is merely a combination of old elements, in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
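Lee's controlled-access mechanism (a credentials engine that authenticates users plus a privileges manager that decides who may do what with which data) reduces to a per-user, per-action permission check. A minimal sketch follows; the class, method, user, and action names are hypothetical and not drawn from Lee.

```python
# Minimal per-user, per-action privilege check in the spirit of Lee's
# privileges manager; all identifiers here are hypothetical illustrations.

class PrivilegesManager:
    def __init__(self) -> None:
        # _privileges[user] is the set of actions that user may perform
        self._privileges: dict[str, set[str]] = {}

    def grant(self, user: str, action: str) -> None:
        self._privileges.setdefault(user, set()).add(action)

    def is_allowed(self, user: str, action: str) -> bool:
        return action in self._privileges.get(user, set())

mgr = PrivilegesManager()
mgr.grant("records_admin", "view_video")
mgr.grant("records_admin", "extract_tamper_flags")
mgr.grant("auditor", "extract_tamper_flags")  # flags only, no raw video

can_audit = mgr.is_allowed("auditor", "extract_tamper_flags")  # True
can_view = mgr.is_allowed("auditor", "view_video")             # False
```

The split mirrors the claimed analytics-report step: a role may be allowed to extract stored tamper flags for a report while being denied access to the underlying footage.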
Claims 5-6 and 16-17: Siminoff further discloses the following limitations:

initiating a query to identify other cameras that captured respective other video within a defined same geographic area as the body worn camera during the time period (col 30, lines 1-10: "In an embodiment, in response to determining an occurrence of tampering, an alert can be generated and transmitted to computing devices of others in a neighborhood (e.g., a geographic region within a predetermined physical radius, size, or other dimension from a location of a tampering detection and/or from a location of a user's A/V recording and communication device). In one example, these computing devices of others can include A/V recording and communication devices of other homeowners and/or businesses in the neighborhood.");

and receiving matching data corresponding to the query from a server system (col 3, lines 27-40: "the database can include audio data corresponding to and/or collected from prior instances of known tampering (e.g., audio data collected from A/V recording and communication devices for which tampering is known to have previously occurred can be stored in a database, and an audio data analysis algorithm can be created to match this prior known tampering). In various cases, processing of the audio data can be, for example, performed on an A/V recording and communication device that collects the audio data and/or performed on other computing devices (e.g., a server in communication with the A/V recording and communication device over a wired and/or wireless network).");

wherein the tamper determination factors include the second factor not directly perceivable from the video, which is at least a portion of the other video from an identified one of the other cameras (col 38, lines 36-41: "In another embodiment, the audio data (e.g., the adjusted audio data features determined at block 808) can be matched against a learned database of in-the-field devices (e.g., a database or computer model that has been trained or otherwise built using data previously collected from one or more A/V recording and communication devices).").

Claims 7-8 and 17-18: Siminoff further discloses the following limitations:

wherein the other cameras include at least one of an in-vehicle camera, a drone camera and a fixed-location security camera (col 49, lines 40-62: "With further reference to FIG. 12, the system 1200 may also include one or more A/V recording and communication devices 180 (e.g., installed at the same property where the alarm devices 195 and smart home devices 190 are installed). The A/V recording and communication devices 180 may include, but are not limited to, video doorbells 181, lighting systems with A/V recording and communication capabilities (e.g., floodlight cameras 182, spotlight cameras (not shown), etc.), security cameras 183, or any other similar devices. The structure and functionality of exemplary A/V recording and communication devices 180 can be, for example, similar to that as illustrated and described above with reference to FIGS. 5A-5C. As described above, in some embodiments, the user may control the A/V recording and communication devices 180 using either or both of the client devices 1210, 1220. Additionally, in some embodiments, the user may control the A/V recording and communication devices 180 through the hub device 115 (e.g., using either or both of the client devices 1210, 1220). In some embodiments, however, the client devices 1210, 1220 may not be associated with an A/V recording and communication device.");

wherein the other cameras include at least one other body worn camera (col 49, lines 34-40: "The client devices 1210, 1220 may comprise, for example, a mobile phone such as a smartphone, or a computing device such as a tablet computer, a laptop computer, a desktop computer, etc. The client devices 1210, 1220 may include any or all of the components and/or functionality of the client device 300 described above with reference to FIG. 3.", where it would be obvious to one of ordinary skill in the art that a body camera could be substituted for a mobile smartphone camera).

Claims 10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Siminoff and Lee, as applied above, and further in view of Lee et al. (US 2020/0042797 A1) (hereinafter Lee 2).

Claims 10 and 20: Siminoff and Lee do not specifically disclose identifying, via a sensor, that a wearer of the body worn camera has drawn a weapon during the time period. In analogous art, Lee 2 discloses the following limitations:

identifying, via a sensor, that a wearer of the body worn camera has drawn a weapon during the time period (see para [0051]: "The electronic computing device analyzes other data feeds received from network-connectable devices 105 that are associated with the same incident as a received video feed. For example, the electronic computing device receives a video feed from a body camera of an officer and also receives one or more biometric sensor data feeds from one or more biometric sensors associated with the officer. The monitored biometric sensor data received may be context information used by the electronic computing device in some situations as explained in detail below. Other sensor data may additionally or alternatively be context information associated with the video (for example, location data received from a network-connectable device 105 such as a portable radio of an officer, a vehicle location system, and an indoor location system of a building; data received from a pedometer and/or accelerometer that indicates how fast an officer is moving or that indicates whether an officer is walking or running; data received from a sensor-enabled holster to detect when a weapon has been removed from the holster; data received from a sensor that detects when the weapon has been discharged; and the like)." and see para [0038]);

and wherein the tamper determination factors include the second factor not directly perceivable from the video, which is the wearer drawing the weapon during the time period (see para [0051]-[0052], showing the sensor data can be included in the analysis of the feed to determine context information for the video).

It would have been obvious to one of ordinary skill in the art at the time of the invention to combine the teachings of Lee 2 with Siminoff and Lee because including weapon information provides valuable information to users to make decisions (see Lee 2, para [0001]). Moreover, it would have been obvious to one of ordinary skill in the art at the time of the invention to include the method for differentiating one or more objects in a video as taught by Lee 2 in the Siminoff and Lee combination, since the claimed invention is merely a combination of old elements, in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
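The claims 10/20 limitation folds a sensor-derived event (a drawn weapon detected by a sensor-enabled holster, per Lee 2) into the tamper determination factors as a factor not perceivable from the video itself. A small sketch under assumed data shapes; the event format, timestamps, and function name are hypothetical, not from Lee 2.

```python
# Hypothetical sketch: decide whether a holster-sensor "weapon_drawn" event
# falls inside the time period t .. t + delta_t of the video irregularity,
# yielding a boolean factor for the tampering-score formula.

def weapon_drawn_during(period, holster_events):
    """True if any 'weapon_drawn' event timestamp lies within the period."""
    start, end = period
    return any(start <= ts <= end
               for ts, kind in holster_events if kind == "weapon_drawn")

# Assumed sensor feed: (timestamp_seconds, event_kind) pairs
holster_events = [(10.0, "holstered"), (42.5, "weapon_drawn")]

# Second factor, not directly perceivable from the video:
factor_weapon_drawn = weapon_drawn_during((40.0, 55.0), holster_events)  # True
```

A video gap that coincides with a drawn-weapon event would thus raise the score even when nothing in the footage itself reveals tampering.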
Allowable Subject Matter

Claims 21-22 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 101 set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Choudury (US 2012/0019640 A1) discloses a system for a camera that reasonably guarantees that videos or photos taken by it and presented are authentic videos or photos of the real world taken at a particular date and time, and therefore guarantees that those videos or photos are not tampered renditions of other genuine videos or photos and have not been artificially generated.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SUJAY KONERU, whose telephone number is (571) 270-3409. The examiner can normally be reached M-F, 8:30 AM to 5 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Patricia Munson, can be reached at 571-270-5396. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SUJAY KONERU/
Primary Examiner, Art Unit 3624

Prosecution Timeline

Nov 27, 2023
Application Filed
May 21, 2025
Non-Final Rejection — §101, §103
Aug 20, 2025
Response Filed
Aug 25, 2025
Final Rejection — §101, §103
Jan 15, 2026
Request for Continued Examination
Feb 17, 2026
Response after Non-Final Action
Mar 09, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596979
PERSONALIZED RISK AND REWARD CRITERIA FOR WORKFORCE MANAGEMENT
2y 5m to grant Granted Apr 07, 2026
Patent 12596972
CONVERSATION-BASED MESSAGING METHOD AND SYSTEM
2y 5m to grant Granted Apr 07, 2026
Patent 12585868
SYSTEM TO TRACE CHANGES IN A CONFIGURATION OF A SERVICE ORDER CODE FOR SERVICE FEATURES OF A TELECOMMUNICATIONS NETWORK
2y 5m to grant Granted Mar 24, 2026
Patent 12579553
REUSABLE DATA SCIENCE MODEL ARCHITECTURES FOR RETAIL MERCHANDISING
2y 5m to grant Granted Mar 17, 2026
Patent 12572990
METHODS AND IoT SYSTEMS FOR MONITORING WELDING OF SMART GAS PIPELINE BASED ON GOVERNMENT SUPERVISION
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 58%
With Interview: 95% (+37.0%)
Median Time to Grant: 3y 2m
PTA Risk: High
Based on 722 resolved cases by this examiner. Grant probability derived from career allow rate.
