Prosecution Insights
Last updated: April 19, 2026
Application No. 18/164,414

CLOUD-BASED SEGREGATED VIDEO STORAGE AND RETRIEVAL FOR IMPROVED NETWORK SCALABILITY AND THROUGHPUT

Final Rejection — §102, §103
Filed: Feb 03, 2023
Examiner: MESSMORE, JONATHAN R
Art Unit: 2482
Tech Center: 2400 — Computer Networks
Assignee: Intellivision Technologies Corp.
OA Round: 4 (Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 5-6
Median Time to Grant: 2y 11m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 76% — above average (375 granted / 491 resolved; +18.4% vs TC avg)
Interview Lift: +9.3% (moderate), measured across resolved cases with interview
Avg Prosecution: 2y 11m typical timeline; 40 applications currently pending
Career History: 531 total applications across all art units
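The headline figures above are internally consistent. A quick arithmetic check, assuming the interview-adjusted probability is a simple additive lift (our assumption for illustration, not the vendor's stated model):

```python
# Illustrative arithmetic behind the dashboard's headline figures.
# Inputs come from the page; the additive lift formula is an assumption.
granted, resolved = 375, 491

allow_rate = granted / resolved * 100      # career allow rate, in percent
print(round(allow_rate))                   # -> 76

interview_lift = 9.3                       # percentage points, per the page
print(round(allow_rate + interview_lift))  # -> 86 ("With Interview")
```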

Statute-Specific Performance

§101: 4.0% (-36.0% vs TC avg)
§102: 27.0% (-13.0% vs TC avg)
§103: 46.5% (+6.5% vs TC avg)
§112: 13.4% (-26.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 491 resolved cases.
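Each statute rate is shown with its delta versus the Tech Center average, so the implied baseline can be recovered. Notably, all four statutes imply the same 40.0% baseline, suggesting the deltas are computed against a single estimated average rather than per-statute baselines. A quick check on the page's numbers (the `stats` mapping simply restates the figures above):

```python
# Back out the implied Tech Center average from each statute's rate and delta.
stats = {
    "§101": (4.0, -36.0),
    "§102": (27.0, -13.0),
    "§103": (46.5, +6.5),
    "§112": (13.4, -26.6),
}
for statute, (rate, delta_vs_tc) in stats.items():
    print(statute, round(rate - delta_vs_tc, 1))   # -> 40.0 for every statute
```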

Office Action

§102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 17 February 2026 have been fully considered but they are not persuasive. Applicant argues the primary reference, Laska, has “no teaching or suggestion… of ‘storing a single stream of the recognized event,’ which is obtained from a single camera, as claimed” (emphasis in original). Examiner respectfully disagrees and respectfully directs Applicant to Laska:

¶ [0094]: The controller device receives the video data from the one or more cameras 118, optionally, performs some preliminary processing on the video data, and sends the video data to the video server system 508 on behalf of the one or more cameras 118 substantially in real-time.

Examiner respectfully submits the method of Laska may work with one camera or multiple cameras. Furthermore, Examiner respectfully directs Applicant’s attention to Laska:

¶ [0117]: Video storage database 514 storing raw video data associated with each of the video sources 522 (each including one or more cameras 118) of each reviewer account, as well as event categorization models (e.g., event clusters, categorization criteria, etc.), event categorization results (e.g., recognized event categories, and assignment of past motion events to the recognized event categories, representative events for each recognized event category, etc.), event masks for past motion events, video segments for each past motion event, preview video (e.g., sprites) of past motion events, and other relevant metadata (e.g., names of event categories, location of the cameras 118, creation time, duration, DTPZ settings of the cameras 118, etc.) associated with the motion events.
Examiner submits the raw video data from a single camera stored by Laska discloses the required “storing a single stream of the recognized event in a storage of the server” as required by the claims. Applicant argues “as described in paragraphs 0243-0247 of Laska, motion events are categorized by use of a clustering technique to form ‘clusters based on density distributions of motion events’ (see paragraph 0243 of Laska), and thus may incorporate motion vectors obtained from multiple cameras”. Examiner respectfully submits the processing of Laska may use multiple inputs via multiple cameras, but this does not teach away from accessing an image frame from a single camera nor storing a single stream in a server. Examiner submits the “single stream” of the claims is not defined in the claims and looking to the specification the definition appears broad to include any combination of data captured with reference to a particular time or event that is sent via a single channel. There appears to be no requirement that the single stream of the recognized event is only comprised of (or only generated from) the image frame from the single camera, as Applicant appears to argue.

Claim Rejections - 35 USC § 102

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action. Claim(s) 21-22, 24-37, and 39-40 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Laska et al. (US 2016/0005281 A1).

Regarding claims 21, 36, and 40, Laska discloses a method performed via CRM on a server comprising: a processor [Laska: ¶ [0008]]; and a memory storing instructions [Laska: ¶ [0008]] that, when executed by the processor, configure the server to perform operations [Laska: ¶ [0092]: FIG.
5 illustrates a representative operating environment 500 in which a video server system 508 provides data processing for monitoring and facilitating review of motion events in video streams captured by video cameras 118] comprising: accessing an event-detected image frame from a single camera [Laska: ¶ [0094]: The controller device receives the video data from the one or more cameras 118, optionally, performs some preliminary processing on the video data, and sends the video data to the video server system 508 on behalf of the one or more cameras 118 substantially in real-time. In some implementations, each camera has its own on-board processing capabilities to perform some preliminary processing on the captured video data before sending the processed video data (along with metadata obtained through the preliminary processing) to the controller device and/or the video server system (emphasis added)]; determining a zone-pixel value from a zone of the event-detected image frame [Laska: ¶ [0015]: The method includes: storing a respective event mask for each of the plurality of motion events identified in the video recording, the respective event mask including an aggregate of motion pixels associated with the at least one object in motion over multiple frames of the motion event; and receiving a definition of a zone of interest within the scene depicted in the video recording. 
In response to receiving the definition of the zone of interest, the method includes: determining, for each of the plurality of motion events, whether the respective event mask of the motion event overlaps with the zone of interest by at least a predetermined overlap factor; and identifying one or more events of interest from the plurality of motion events, where the respective event mask of each of the identified events of interest is determined to overlap with the zone of interest by at least the predetermined overlap factor; ¶ [0016]: In accordance with some implementations, a method of monitoring selected zones in a scene depicted in a video stream is performed at a server (e.g., the video server system 508, FIGS. 5-6) having one or more processors and memory; and ¶ [0244]: each time a new motion vector comes in to be categorized, the event categorizer places the new motion vector into the vector event space according to its value. If the new motion vector is sufficiently close to or falls within an existing dense cluster, the event category associated with the dense cluster is assigned to the new motion vector. If the new motion vector is not sufficiently close to any existing cluster, the new motion vector forms its own cluster of one member, and is assigned to the category of unrecognized events. If the new motion vector is sufficiently close to or falls within an existing sparse cluster, the cluster is updated with the addition of the new motion vector. 
If the updated cluster is now a dense cluster, the updated cluster is promoted, and all motion vectors (including the new motion vector) in the updated cluster are assigned to a new event category created for the updated cluster]; detecting a threshold-grade event by referencing the zone-pixel value against a pre-defined reference table [Laska: ¶ [0096]: The video storage database 514 stores raw video data received from the video sources 522, as well as various types of metadata, such as motion events, event categories, event category models, event filters, and event masks, for use in data processing for event monitoring and review for each reviewer account] of event-recognized pixel values [Laska: ¶ [0236]: the motion masks corresponding to each motion object detected in the video segment are aggregated across all frames of the video segment to create an event mask for the motion event involving the motion object. As shown in FIG. 11C-(b), in the event mask, all pixel locations containing less than a threshold number of motion pixels (e.g., one motion pixel) are masked and shown in black, while all pixel locations containing at least the threshold number of motion pixels are shown in white; and ¶ [0360]: a scene change detector associated with the application resets the local, software zoom when the total pixel color difference between a frame from the second video feed and a previous frame from the first video feed exceeds a predefined threshold]; categorizing the threshold-grade event into a recognized event [Laska: ¶ [0010]: The method includes detecting a motion event and determining one or more characteristics for the motion event.
In accordance with a determination that the one or more determined characteristics for the motion event satisfy one or more criteria for a respective event category, the method includes: assigning the motion event to the respective category; and displaying an indicator for the detected motion event on the event timeline with a display characteristic corresponding to the respective category; and ¶ [0243]: In some implementations, categorization of motion events is through a density-based clustering technique (e.g., DBscan) that forms clusters based on density distributions of motion events (e.g., motion events as represented by their respective motion vectors) in a vector event space. Regions with sufficiently high densities of motion vectors are promoted as recognized event categories, and all motion vectors within each promoted region are deemed to belong to a respective recognized event category associated with that promoted region. In contrast, regions that are not sufficiently dense are not promoted or recognized as event categories]; storing a single stream of the recognized event in a storage of the server [Laska: ¶ [0117]: Video storage database 514 storing raw video data associated with each of the video sources 522 (each including one or more cameras 118) of each reviewer account, as well as event categorization models (e.g., event clusters, categorization criteria, etc.), event categorization results (e.g., recognized event categories, and assignment of past motion events to the recognized event categories, representative events for each recognized event category, etc.), event masks for past motion events, video segments for each past motion event, preview video (e.g., sprites) of past motion events, and other relevant metadata (e.g., names of event categories, location of the cameras 118, creation time, duration, DTPZ settings of the cameras 118, etc.) 
associated with the motion events]; and overlaying contextual data comprising information of the recognized event on the single stream of the recognized event without requiring a second stream [Laska: ¶ [0164]-[0166]: FIG. 9E also illustrates client device 504 displaying a notification 928 for a newly detected respective motion event corresponding to event indicator 922L. For example, event category B is recognized prior to or concurrent with detecting the respective motion event. For example, as the respective motion event is detected and assigned to event category B, an event indicator 922L is displayed on the event timeline 910 with the display characteristic for event category B (e.g., the diagonal shading pattern). Continuing with this example, after or as the event indicator 922L is displayed on the event timeline 910, the notification 928 pops-up from the event indicator 922L. In FIG. 9E, the notification 928 notifies the user of the client device 504 that the motion event detected at 12:32:52 pm was assigned to event category B. In some implementations, the notification 928 is at least partially overlaid on the video feed displayed in the first region 903. In some implementations, the notification 928 pops-up from the event timeline 910 and is at least partially overlaid on the video feed displayed in the first region]. Regarding Claims 22 and 37, Laska discloses all the limitations of Claims 21 and 36, respectively, and is analyzed as previously discussed with respect to those claims. Furthermore, Laska discloses wherein the operations comprise: feeding the single stream from the server to a retrieval cloud-based server [Laska: ¶ [0072]: Through the one or more networks 162, the smart devices may communicate with a smart home provider server system 164 (also called a central server system and/or a cloud-computing system herein]. 
Regarding Claims 24 and 39, Laska discloses all the limitations of Claims 21 and 36, respectively, and is analyzed as previously discussed with respect to those claims. Furthermore, Laska discloses wherein detecting the threshold-grade event comprises: detecting the threshold-grade event from audio-video data of a real-world environment captured from the camera by an event management system applying event detection parameters [Laska: ¶ [0247]]. Regarding Claim 25, Laska discloses all the limitations of Claim 24, and is analyzed as previously discussed with respect to that claim. Furthermore, Laska discloses wherein the event management system comprises at least one of device management services, user management services, or alert management services [Laska: ¶ [0198]: the video server system 508 manages, operates, and controls access to the smart home environment 100. In some implementations, a respective client-side module 502 is associated with a user account registered with the video server system 508 that corresponds to a user of the client device 504]. Regarding Claim 26, Laska discloses all the limitations of Claim 21, and is analyzed as previously discussed with respect to that claim. Furthermore, Laska discloses wherein the event management system detects the threshold-grade event based on a combination of a pre-defined [Laska: ¶ [0360] and Claim 11], a user-defined [Laska: ¶ [0265]], and a learned threshold [Laska: ¶ [0049]] of any one of computed pixel values derived from a parameter [Laska: ¶ [0247]]. Regarding Claim 27, Laska discloses all the limitations of Claim 26, and is analyzed as previously discussed with respect to that claim. Furthermore, Laska discloses wherein the parameter is at least one of object detection [Laska: ¶ [0013]], scene change [Laska: ¶ [0227]], object left or removed, line crossing, movement [Laska: ¶ [0046]], count [Laska: ¶ [0247]], shape [Laska: ¶ [0264]], or sound change. 
Regarding Claim 28, Laska discloses all the limitations of Claim 21, and is analyzed as previously discussed with respect to that claim. Furthermore, Laska discloses wherein categorizing the threshold- grade event comprises: analyzing the threshold-grade event for categorization into the recognized event by an event management system applying event recognition parameters [Laska: ¶ [0260]]. Regarding Claim 29, Laska discloses all the limitations of Claim 28, and is analyzed as previously discussed with respect to that claim. Furthermore, Laska discloses wherein the event management system analyzes at least one of computed pixel values derived from at least one of a parameter from the threshold-grade event by referencing against at least one of a pre-defined [Laska: ¶ [0360]; and Claim 11], user-defined [Laska: ¶ [0265]], or learned reference table of recognized event-computed pixel values [Laska: ¶ [0049]]. Regarding Claim 30, Laska discloses all the limitations of Claim 28, and is analyzed as previously discussed with respect to that claim. 
Furthermore, Laska discloses wherein the recognized event is at least one of face recognition [Laska: ¶ [0264]], object recognition [Laska: ¶ [0013]], movement [Laska: ¶ [0046]], intrusion, location in designated areas, loitering, vehicle/license plate recognition, impact, and, or aberrant sound, wherein the event management system is configured to query an event recognized database [Laska: ¶ [0096]: The video storage database 514 stores raw video data received from the video sources 522, as well as various types of metadata, such as motion events, event categories, event category models, event filters, and event masks, for use in data processing for event monitoring and review for each reviewer account], and retrieve any one of a matched recognized event based on at least one of a pixel value, analysis of pixel value [Laska: ¶ [0234]], metadata [Laska: ¶ [0096]], or hash map from an event bucket in the event recognized database. Regarding Claim 31, Laska discloses all the limitations of Claim 28, and is analyzed as previously discussed with respect to that claim. Furthermore, Laska discloses wherein the event management system is configured to: run an analysis algorithm for movement tracking or movement extrapolating over an array of incoming image frames [Laska: ¶ [0015]]; query an event recognized database [Laska: ¶ [0096]]; and retrieve any one of a matched recognized event based on movement data from an event bucket in the event recognized database [Laska: ¶ [0234]]. Regarding Claim 32, Laska discloses all the limitations of Claim 28, and is analyzed as previously discussed with respect to that claim. 
Furthermore, Laska discloses wherein the event management system is configured to: determine content of the threshold-grade event based on determining at least one object identity by matching any one of at least scene change [Laska: ¶ [0227]], movement [Laska: ¶ [0046]], count [Laska: ¶ [0247]], shape [Laska: ¶ [0264]], sound, metadata, or hashmap characteristics of objects in an event bucket in an event recognized database. Regarding Claim 33, Laska discloses all the limitations of Claim 21, and is analyzed as previously discussed with respect to that claim. Furthermore, Laska discloses wherein the operations comprise: employing machine learning to update any one of a threshold of computed pixel values for event detection or update any one of a reference analysis of computed pixel values for event recognition [Laska: ¶ [0049]]. Regarding Claim 34, Laska discloses all the limitations of Claim 33, and is analyzed as previously discussed with respect to that claim. Furthermore, Laska discloses wherein the machine learning is at least one of a convolution neural network, associated model [Laska: ¶ [0216]], training data set [Laska: ¶ [0241]], feed-forward neural network, and, or back-propagated neural network. Regarding Claim 35, Laska discloses all the limitations of Claim 28, and is analyzed as previously discussed with respect to that claim. 
Furthermore, Laska discloses wherein the event management system is configured to perform at least one of an audio, video [Laska: ¶ [0013]], or image frame upload, save, retrieval, or playback in a staged-event driven manner (SEDA) [Laska: ¶ [0013]: the method includes initiating event recognition processing on a first video segment associated with the start of the first motion event candidate], wherein the at least one of the upload [Laska: ¶ [0219]], save [Laska: ¶ [0159]], retrieval [Laska: ¶ [0082]], or playback [Laska: ¶ [0157]] is via a serial of audio-video-image segments over a time frame comprising a detected and, or recognized event [Laska: ¶ [0013]].

Claim(s) 23 and 38 is/are rejected under 35 U.S.C. 103 as being unpatentable over Laska as applied to claims 21 and 36 above, and further in view of Montminy et al. (US 2007/0126869 A1).

Regarding Claims 23 and 38, Laska disclose(s) all the limitations of Claims 21 and 36, and is/are analyzed as previously discussed with respect to those claims. Laska may not explicitly disclose wherein the operations comprise: transmitting a status message indicating that the camera is operating improperly, in response to determining that the camera is operating improperly. However, Montminy discloses wherein the operations comprise: transmitting a status message indicating that the camera is operating improperly, in response to determining that the camera is operating improperly [Montminy: ¶ [0039]: A camera health measurement is computed based on that comparison. The camera monitor 103 detects a camera malfunction when the camera health measurement exceeds a malfunction threshold, either by exceeding a malfunction severity threshold or a malfunction persistence threshold. The camera health monitor can send a malfunction indication to the user interface 105 to advise the administrator and advantageously show relevant camera malfunction information, such as malfunction severity and/or malfunction persistence].
It would have been obvious to one having ordinary skill in the art before the effective filing date to combine the malfunction alert of Montminy with the processing of Laska in order to provide improved user experience with higher operational situational awareness.

Claim(s) 26-27 is/are rejected under 35 U.S.C. 103 as being unpatentable over Laska as applied to claim 24 above, and further in view of Montminy et al. (US 2007/0126869 A1) and Lee et al. (US 2010/0208063 A1).

Regarding Claim 26, Laska disclose(s) all the limitations of Claim 24, and is/are analyzed as previously discussed with respect to that claim. Furthermore, Laska discloses wherein the event management system detects the threshold-grade event based on a combination of a pre-defined [Laska: ¶ [0265]], a user-defined, and a learned threshold of any one of computed pixel values derived from a parameter. Laska may not explicitly disclose wherein the event management system detects the threshold-grade event based on a combination of a user-defined, and a learned threshold of any one of computed pixel values derived from a parameter. However, Montminy discloses wherein the event management system detects the threshold-grade event based on a combination of a learned threshold of any one of computed pixel values derived from a parameter [Montminy: ¶ [0038]: The camera monitor 103 can be used in a learning mode to determine whether a current camera health record should be stored as part of the stored camera health records, or whether a match count should be incremented for a particular stored camera health record]. Montminy may not explicitly disclose wherein the event management system detects the threshold-grade event based on a combination of a user-defined threshold of any one of computed pixel values derived from a parameter.
However, Lee discloses wherein the event management system detects the threshold-grade event based on a combination of a user-defined threshold of any one of computed pixel values derived from a parameter [Lee: ¶ [0031]: a substantial amount of the work in video analytics has been focused on collecting motion data in user-specified "regions of interest" (ROIs). The collected motion data may then be compared to motion data for an input object using user-specified thresholds. In other words, the motion trajectory of the monitored object may be compared with motion patterns and distance threshold defined by user to detect these motion patterns]. It would have been obvious to one having ordinary skill in the art before the effective filing date to combine the processes of Laska with the various methods of providing or determining thresholds for object detection of Montminy and Lee in order to improve object detection.

Regarding Claim 27, Laska in view of Montminy, and Lee disclose(s) all the limitations of Claim 26, and is/are analyzed as previously discussed with respect to that claim. Furthermore, Laska in view of Montminy, and Lee discloses wherein the parameter is at least one of object detection, scene change, object left or removed, line crossing, movement, count, shape, or sound change [Lee: ¶ [0031]].

Claim(s) 31 is/are rejected under 35 U.S.C. 103 as being unpatentable over Laska as applied to claim 28 above, and further in view of Lee et al. (US 2010/0208063 A1).

Regarding Claim 31, Laska disclose(s) all the limitations of Claim 28, and is/are analyzed as previously discussed with respect to that claim.
Laska may not explicitly disclose wherein the event management system is configured to: run an analysis algorithm for movement tracking or movement extrapolating over an array of incoming image frames; query an event recognized database; and retrieve any one of a matched recognized event based on movement data from an event bucket in the event recognized database. However, Lee discloses wherein the event management system is configured to: run an analysis algorithm for movement tracking or movement extrapolating over an array of incoming image frames; query an event recognized database; and retrieve any one of a matched recognized event based on movement data from an event bucket in the event recognized database [Lee: ¶ [0031]]. It would have been obvious to one having ordinary skill in the art before the effective filing date to combine the motion tracking of Lee with the object tracking of Laska in order to improve accuracy. Claim(s) 32-34 is/are rejected under 35 U.S.C. 103 as being unpatentable over Laska as applied to claim 28 above, and further in view of Cobb et al. (US 2010/0208986 A1). Regarding Claim 32, Laska disclose(s) all the limitations of Claim 28, and is/are analyzed as previously discussed with respect to that claim. Laska may not explicitly disclose wherein the event management system is configured to: determine content of the threshold-grade event based on determining at least one object identity by matching any one of at least scene change, movement, count, shape, sound, metadata, or hashmap characteristics of objects in an event bucket in an event recognized database. However, Cobb discloses wherein the event management system is configured to: determine content of the threshold-grade event based on determining at least one object identity [Cobb: ¶ [0005]: Some currently available video surveillance systems provide simple object recognition capabilities. 
For example, some currently available systems are configured to identify and track objects moving within a sequence of video frame using a frame-by-frame analysis. These systems may be configured to isolate foreground elements of a scene from background elements of the scene (i.e., for identifying portions of a scene that depict activity (e.g., people, vehicles, etc.) and portions that depict fixed elements of the scene (e.g., a road surface or a subway platform). Thus, the scene background essentially provides a stage upon which activity occurs. Some video surveillance systems determine the difference between scene background by generating a model background image believed to provide the appropriate pixel color, grayscale, and/or intensity values for each pixel in an image of the scene. Further, in such systems, if a pixel value in a given frame differs significantly from the background model, then that pixel may be classified as depicting scene foreground. Contiguous regions of the scene (i.e., groups of adjacent pixels) that contain a portion of scene foreground (referred to as a foreground "blob") are identified, and a given "blob" may be matched from frame-to-frame as depicting the same object. That is, a "blob" may be tracked as it moves from frame-to-frame within the scene. Thus, once identified, a "blob" may be tracked from frame-to-frame in order to follow the movement of the "blob" over time, e.g., a person walking across the field of vision of a video surveillance camera] by matching any one of at least scene change, movement, count, shape, sound, metadata, or hashmap characteristics of objects in an event bucket in an event recognized database [Cobb: ¶ [0005]]. It would have been obvious to one having ordinary skill in the art before the effective filing date to combine the object identifying and tracking of Cobb with the processing of Laska in order to provide improved tracking. 
Regarding Claim 33, Laska disclose(s) all the limitations of Claim 21, and is/are analyzed as previously discussed with respect to that claim. Laska may not explicitly disclose wherein the operations comprise: employing machine learning to update any one of a threshold of computed pixel values for event detection or update any one of a reference analysis of computed pixel values for event recognition. However, Cobb discloses wherein the operations comprise: employing machine learning to update any one of a threshold of computed pixel values for event detection or update any one of a reference analysis of computed pixel values for event recognition [Cobb: ¶ [0021]: As the vehicle occludes more and more pixels, the computer vision engine may identify the "blob" of pixels as a depicting part of a common foreground object and attempt to track its position from frame to frame. For example, the position and kinematics of the foreground object determined from one frame (or frames) may be used to predict a future position of the foreground object in a subsequent frame. Further, a classifier may be configured to evaluate a variety of features derived from observing a foreground blob and classify it as being a particular thing, e.g., as actually depicting a person or a vehicle. Once so classified, a machine learning engine may observe the behavior of the vehicle and compare it with the observed behavior of other objects classified as being a vehicle]. Regarding Claim 34, Laska in view of Cobb disclose(s) all the limitations of Claim 33, and is/are analyzed as previously discussed with respect to that claim. Furthermore, Laska in view of Cobb discloses wherein the machine learning is at least one of a convolution neural network, associated model, training data set, feed-forward neural network, and, or back-propagated neural network [Laska: ¶ [0049]; ¶ [0064]; and Cobb: ¶ [0021]]. 
Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN R MESSMORE whose telephone number is (571)272-2773. The examiner can normally be reached Monday-Friday 9-5 EST/EDT. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chris Kelley, can be reached at 571-272-7331. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JONATHAN R MESSMORE/Primary Examiner, Art Unit 2482
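The §102 mapping leans heavily on Laska's event-mask and zone-of-interest disclosures (¶ [0015], ¶ [0236]): per-frame motion masks are aggregated over a motion event, pixel locations with fewer than a threshold number of motion pixels are masked out, and an event is "of interest" when its mask overlaps a user-defined zone by at least a predetermined overlap factor. A minimal illustrative sketch of that general approach; the function names, array shapes, and overlap definition are ours, not Laska's:

```python
import numpy as np

def event_mask(frame_masks, min_hits=1):
    """Aggregate boolean per-frame motion masks into one event mask.

    A pixel survives only if it was a motion pixel in at least
    `min_hits` frames (cf. Laska ¶ [0236]).
    """
    hits = np.sum(np.stack(frame_masks), axis=0)  # motion count per pixel
    return hits >= min_hits

def overlaps_zone(mask, zone, overlap_factor=0.5):
    """True if enough of the zone's pixels are event-mask pixels (¶ [0015])."""
    zone_pixels = zone.sum()
    if zone_pixels == 0:
        return False
    return (mask & zone).sum() / zone_pixels >= overlap_factor

# Toy data: an object moving in the center of a 4x4 scene for 3 frames.
frames = [np.zeros((4, 4), bool) for _ in range(3)]
for f in frames:
    f[1:3, 1:3] = True
zone = np.zeros((4, 4), bool)
zone[0:3, 0:3] = True                        # upper-left zone of interest
mask = event_mask(frames)
print(overlaps_zone(mask, zone, 0.4))        # -> True (4 of 9 zone pixels)
```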
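Likewise, the density-based categorization the action quotes from ¶ [0243]-[0244] (DBSCAN-style clustering of motion vectors, with dense clusters promoted to recognized event categories) can be sketched in toy form. The thresholds, distance metric, and incremental update below are illustrative simplifications; unlike Laska, this sketch does not retroactively re-label earlier members when a cluster is promoted:

```python
import math

NEAR, DENSE = 2.0, 3    # closeness radius; members needed to promote a cluster

clusters = []           # each cluster: {"center": (x, y), "members": [...]}

def categorize(vec):
    """Place a new 2-D motion vector per the ¶ [0244] rules (toy version)."""
    for c in clusters:
        if math.dist(vec, c["center"]) <= NEAR:
            c["members"].append(vec)
            n = len(c["members"])
            # recompute the cluster centroid
            c["center"] = tuple(sum(v[i] for v in c["members"]) / n
                                for i in range(2))
            return "recognized" if n >= DENSE else "unrecognized"
    # not close to any cluster: new singleton, category "unrecognized"
    clusters.append({"center": vec, "members": [vec]})
    return "unrecognized"

print(categorize((0.0, 0.0)))   # -> unrecognized (cluster of one)
print(categorize((1.0, 0.0)))   # -> unrecognized (sparse cluster of two)
print(categorize((0.5, 0.5)))   # -> recognized  (cluster promoted to dense)
```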

Prosecution Timeline

Feb 03, 2023
Application Filed
Dec 11, 2024
Non-Final Rejection — §102, §103
Mar 17, 2025
Response Filed
Jun 25, 2025
Final Rejection — §102, §103
Aug 27, 2025
Response after Non-Final Action
Sep 29, 2025
Request for Continued Examination
Oct 05, 2025
Response after Non-Final Action
Nov 13, 2025
Non-Final Rejection — §102, §103
Feb 17, 2026
Response Filed
Mar 17, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598306
PARSING FRIENDLY AND ERROR RESILIENT MERGE FLAG CODING
2y 5m to grant Granted Apr 07, 2026
Patent 12587680
Attribute Layers And Signaling In Point Cloud Coding
2y 5m to grant Granted Mar 24, 2026
Patent 12581073
VIDEO ENCODING AND DECODING
2y 5m to grant Granted Mar 17, 2026
Patent 12556683
INTRA BLOCK COPY WITH TEMPLATE MATCHING FOR VIDEO ENCODING AND DECODING
2y 5m to grant Granted Feb 17, 2026
Patent 12556663
GAMING TABLE EVENTS DETECTING AND PROCESSING
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 76%
With Interview: 86% (+9.3%)
Median Time to Grant: 2y 11m
PTA Risk: High
Based on 491 resolved cases by this examiner. Grant probability derived from career allow rate.
