Prosecution Insights
Last updated: April 19, 2026
Application No. 18/276,436

POSITION DETECTION SYSTEM

Final Rejection (§103)
Filed: Mar 22, 2024
Examiner: NIRJHAR, NASIM NAZRUL
Art Unit: 2896
Tech Center: 2800 (Semiconductors & Electrical Systems)
Assignee: Asterisk Inc.
OA Round: 2 (Final)

Grant Probability: 74% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 6m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 74% (379 granted / 512 resolved; +6.0% vs TC avg, above average)
Interview Lift: +18.7% among resolved cases with interview (strong)
Avg Prosecution: 2y 6m typical; 37 applications currently pending
Total Applications: 549 across all art units
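
The headline figures above are simple ratios over the examiner's career counts. Here is a minimal sketch of the arithmetic in Python, using only numbers shown on this page; the variable names are ours, not the vendor's, and the TC average is back-solved from the displayed delta, so treat it as an estimate:

```python
# Derived examiner stats from the raw counts shown above.
granted = 379                # career grants
resolved = 512               # resolved cases (grants + abandonments)
total_applications = 549     # across all art units

allow_rate = granted / resolved             # -> 74.0% career allow rate
pending = total_applications - resolved     # -> 37 currently pending
implied_tc_avg = allow_rate - 0.060         # -> ~68% (estimate from +6.0% delta)

print(f"Allow rate: {allow_rate:.1%}, pending: {pending}, "
      f"implied TC avg: {implied_tc_avg:.1%}")
```

Note that 379/512 reproduces the displayed 74%, and 549 - 512 reproduces the 37 pending applications.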

Statute-Specific Performance

§101: 3.8% (-36.2% vs TC avg)
§103: 75.4% (+35.4% vs TC avg)
§102: 3.4% (-36.6% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)
Tech Center averages shown are estimates. Based on career data from 512 resolved cases.
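
The per-statute figures compare this examiner's rejection frequency to the Tech Center average. A short sketch of how those deltas resolve; the TC averages below are back-solved from the displayed numbers, and the denominator interpretation (share of the 512 resolved cases raising each rejection type) is our reading of the footnote, not a vendor specification:

```python
# Examiner rejection rates by statute and displayed deltas vs the
# Tech Center average. TC averages are back-solved estimates.
examiner_rate = {"101": 0.038, "103": 0.754, "102": 0.034, "112": 0.071}
delta_vs_tc = {"101": -0.362, "103": 0.354, "102": -0.366, "112": -0.329}

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]
    print(f"§{statute}: examiner {rate:.1%}, TC avg ≈ {tc_avg:.1%}")
```

All four back-solved averages come out at exactly 40%, which suggests the dashboard normalizes every statute against a single estimated Tech Center baseline rather than true per-statute averages.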

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This communication is responsive to the correspondence filed on 12/4/25. Claims 1-7 are presented for examination.

Response to Arguments

Applicant's arguments filed 12/4/25 with respect to claims 1-7 have been considered but are moot in view of the new ground(s) of rejection.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3 and 5 are rejected under 35 U.S.C. 103 as being unpatentable over Mirza (U.S. Pub. No. 20210125352 A1).

Regarding claim 1: Mirza teaches a position detection system comprising:

a plurality of cameras (Mirza [0075]: Examples of sensors 108 include, but are not limited to, cameras, video cameras, web cameras, printed circuit board (PCB) cameras, depth sensing cameras, time-of-flight cameras, LiDARs, structured light cameras, or any other suitable type of imaging device.) each configured to capture an image corresponding to a respective field of view area (Mirza [0077]: Each sensor 108 has a limited field of view within the space 102. This means that each sensor 108 may only be able to capture a portion of the space 102 within their field of view. To provide complete coverage of the space 102, the tracking system 100 may use multiple sensors 108 configured as a sensor array. In FIG. 1, the sensors 108 are configured as a three by four sensor array. In other examples, a sensor array may comprise any other suitable number and/or configuration of sensors 108. In one embodiment, the sensor array is positioned parallel with the floor of the space 102. In some embodiments, the sensor array is configured such that adjacent sensors 108 have at least partially overlapping fields of view.);

at least one processor (Mirza [0399]: The one or more processors are configured to implement various instructions. For example, the one or more processors are configured to execute instructions to implement a tracking engine 3808. In this way, processor 3802 may be a special purpose computer designed to implement the functions disclosed herein. In an embodiment, the tracking engine 3808 is implemented using logic units, FPGAs, ASICs, DSPs, or any other suitable hardware. The tracking engine 3808 is configured [to] operate as described in FIGS. 1-18. For example, the tracking engine 3808 may be configured to perform the steps of methods 200, 600, 800, 1000, 1200, 1500, 1600, and 1700 as described in FIGS. 2, 6, 8, 10, 12, 15, 16, and 17, respectively.) configured to detect a position of an object in images generated by the plurality of cameras and configured to specify information regarding the position of the object in a single area (Mirza [0151]: FIG. 10 is a flowchart of an embodiment of a tracking hand off method 1000 for the tracking system 100. The tracking system 100 may employ method 1000 to hand off [specify information regarding the position] tracking information for an object (e.g. a person) as it moves between the fields of view of adjacent sensors 108. For example, the tracking system 100 may track the position of people (e.g. shoppers) as they move around within the interior of the space 102. Each sensor 108 has a limited field of view which means that each sensor 108 can only track the position of a person within a portion of the space 102. The tracking system 100 employs a plurality of sensors 108 to track the movement of a person within the entire space 102. Each sensor 108 operates independent from one another which means that the tracking system 100 keeps track [specify information regarding the position through hand off] of a person as they move from the field of view of one sensor 108 into the field of view of an adjacent sensor 108.) where the respective field of view areas of the cameras are integrated, based on the position of the detected object (Mirza [0063]: This means that information from each camera needs to be processed independently to identify and track people and objects within the field of view of a particular camera. The information from each camera then needs to be combined and processed as a collective in order to track people and objects within the physical space.), each of the respective field of view areas being a partial area of the single area (Mirza [0151], reproduced above);

and a computer (Mirza [0399], reproduced above) configured to transmit the specified information on the object to a host system (Mirza [0083]: A server 106 may be formed by one or more physical devices configured to provide services and resources (e.g. data and/or hardware resources) for the tracking system 100. Additional information about the hardware configuration of a server 106 is described in FIG. 38. In one embodiment, a server 106 may be operably coupled to one or more sensors 108 and/or weight sensors 110. The tracking system 100 may comprise any suitable number of servers 106. For example, the tracking system 100 may comprise a first server 106 that is in signal communication with a first plurality of sensors 108 in a sensor array and a second server 106 that is in signal communication with a second plurality of sensors 108 in the sensor array. As another example, the tracking system 100 may comprise a first server 106 that is in signal communication with a plurality of sensors 108 and a second server 106 that is in signal communication with a plurality of weight sensors 110. In other examples, the tracking system 100 may comprise any other suitable number of servers 106 that are each in signal communication with one or more sensors 108 and/or weight sensors 110. Mirza [0073]: In one embodiment, the tracking system 100 comprises one or more clients 105, one or more servers 106, one or more scanners 115, one or more sensors 108, and one or more weight sensors 110. The one or more clients 105, one or more servers 106, one or more scanners 115, one or more sensors 108, and one or more weight sensors 110 may be in signal communication with each other over a network 107. The network 107 may be any suitable type of wireless and/or wired network including, but not limited to, all or a portion of the Internet, an Intranet, a Bluetooth network, a WIFI network).

Mirza shows transmitting the information to a host system but does not explicitly show transmitting the specified information on the object to a host system, so no rejection under § 102 is made. However, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art that Mirza teaches transmitting the specified information on the object, per Mirza [0151] and FIG. 10. This transmission is obviously to the host system because the cameras 108 are networked (107) through the host server 106, as shown in FIG. 1 and per paragraph [0073].

Regarding claim 2: Mirza teaches the position detection system according to claim 1, wherein the at least one processor is configured to learn a first image of the object that is training data (Mirza [0018]: the tracking system is configured to employ a specially structured approach for object re-identification when the identity of a tracked person becomes uncertain or unknown (e.g., based on the candidate lists described above). For example, rather than relying heavily on resource-expensive machine learning-based approaches to re-identify people, “lower-cost” descriptors related to observable characteristics (e.g., height, color, width, volume, etc.) of people are used first for person re-identification.), and recognize an object in the image (Mirza [0329]: if these descriptors are not sufficient for reliably re-identifying the person (e.g., because other people being tracked have similar characteristics), progressively higher-level approaches may be used (e.g., involving artificial neural networks that are trained to recognize people) which may be more effective at person identification but which generally involve the use of more processing resources.) that is input data acquired from the plurality of cameras to specify a position of the recognized object (Mirza [0151], reproduced above).

Regarding claims 3 and 5: Mirza teaches the position detection system according to claim 1, wherein each of the at least one processor is an image processor (Mirza [0091]: a client 105 may be configured to provide image processing capabilities for images or frames 302 that are captured by a sensor 108.).

Claims 4 and 6-7 are rejected under 35 U.S.C. 103 as being unpatentable over Mirza (U.S. Pub. No. 20210125352 A1), in view of Chakravarty (U.S. Pub. No. 20200097724 A1).

Regarding claims 4 and 6-7: Mirza teaches the position detection system according to claim 1, but does not explicitly teach wherein at least one of the plurality of cameras and the at least one processor are disposed in a single unit. However, Chakravarty teaches wherein at least one of the plurality of cameras and the at least one processor are disposed in a single unit (Chakravarty Fig. 4: camera 304 and image processor 306 are disposed in AI module 400. [0054]: Specifically, the AI module 400 includes the image-acquisition module 304 (FIG. 3), an image-preprocessing module 306 (FIG. 3). [0034]: Mobile embodiments of the sensor module include, but are not limited to, a smartphone, tablet computer, wearable computing device, or any other portable computing device configured with one or more processors, an RGB camera, wireless communication capabilities, an optional depth sensor, an optional light source, and software for performing the image processing, object detecting, tracking, and recognizing, self-improving machine learning, and optional light guidance functions described herein. The software can be embodied in a downloaded application (app) that can be stored on the mobile device. Being portable, a person or machine can, in effect, carry an object-identification device capable of recognizing objects captured by the camera(s) of the mobile device. For example, a person with such a device can run the software, approach a table (i.e., support surface) holding various objects, point the device (i.e., its camera(s)) at each object, capture an image of an object, and be told the type (identity) of the object. To obtain the identity of the object, the mobile device may communicate with a remote server that hosts the DNN, sending the image to the remote server, and receiving the identity of the object.). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Mirza, further incorporating Chakravarty in video/camera technology. One would be motivated to do so in order to dispose at least one of the plurality of cameras and the at least one processor in a single unit; this functionality will improve efficiency with predictable results.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NASIM N NIRJHAR whose telephone number is (571) 272-3792. The examiner can normally be reached on Monday - Friday, 8 am to 5 pm ET. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William F Kraig, can be reached on (571) 272-8660. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NASIM N NIRJHAR/
Primary Examiner, Art Unit 2896
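
For readers less familiar with the claimed subject matter, claim 1 turns on mapping per-camera detections into one integrated coordinate space and handing tracking off between adjacent fields of view, as in Mirza's method 1000. Below is a minimal illustrative sketch of that concept; the function names, homography values, and hand-off rule are all hypothetical, not the applicant's or Mirza's actual implementation:

```python
import numpy as np

# Each camera detects an object in its own image; a per-camera
# homography maps the pixel position into a single integrated
# floor-plan area; tracking is handed off when the object is
# re-detected by an adjacent camera. All values are made up.

def to_global(H: np.ndarray, pixel_xy: tuple) -> tuple:
    """Project a pixel coordinate into the shared floor-plan frame."""
    x, y = pixel_xy
    gx, gy, w = H @ np.array([x, y, 1.0])
    return (gx / w, gy / w)

# One homography per camera, calibrated offline (illustrative values).
homographies = {
    "cam_a": np.array([[0.01, 0.0, 0.0], [0.0, 0.01, 0.0], [0.0, 0.0, 1.0]]),
    "cam_b": np.array([[0.01, 0.0, 5.0], [0.0, 0.01, 0.0], [0.0, 0.0, 1.0]]),
}

track = {"id": 7, "camera": "cam_a"}
detection = ("cam_b", (120.0, 240.0))  # object re-detected by adjacent camera

cam, pixel = detection
position = to_global(homographies[cam], pixel)  # position in the single area
if cam != track["camera"]:
    track["camera"] = cam  # hand off: same track id, new owning camera
print(f"track {track['id']} at {position} via {cam}")
```

The dispute is not over this mechanics but over the final limitation: the examiner's §103 position is that once positions are handed off over network 107 to server 106, transmitting the specified position to a host system follows obviously.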

Prosecution Timeline

Mar 22, 2024: Application Filed
Jun 03, 2025: Non-Final Rejection (§103)
Dec 04, 2025: Response Filed
Jan 25, 2026: Final Rejection (§103), current

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598324: DEPTH DIFFERENCES IN PLACE OF MOTION VECTORS (2y 5m to grant; granted Apr 07, 2026)
Patent 12593131: VELOCITY MATCHING IMAGING OF A TARGET ELEMENT (2y 5m to grant; granted Mar 31, 2026)
Patent 12593074: SYSTEMS AND METHODS OF BUFFERING IMAGE DATA BETWEEN A PIXEL PROCESSOR AND AN ENTROPY CODER (2y 5m to grant; granted Mar 31, 2026)
Patent 12587662: METHOD, APPARATUS AND STORAGE MEDIUM FOR IMAGE ENCODING/DECODING (2y 5m to grant; granted Mar 24, 2026)
Patent 12587628: DISPLAY DEVICE AND METHOD OF DRIVING THE SAME (2y 5m to grant; granted Mar 24, 2026)
Study what changed in these cases to get past this examiner (based on the 5 most recent grants).


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 74%
With Interview: 93% (+18.7%)
Median Time to Grant: 2y 6m
PTA Risk: Moderate
Based on 512 resolved cases by this examiner. Grant probability derived from career allow rate.
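
These projections follow directly from the examiner statistics above: baseline grant probability is the career allow rate, and an examiner interview adds the observed lift. A sketch of the implied arithmetic; the vendor's actual model may weight more factors, and this reproduces only the displayed numbers:

```python
# Projection arithmetic implied by the panel above.
baseline = 379 / 512     # career allow rate -> 74% grant probability
interview_lift = 0.187   # +18.7% observed for this examiner

with_interview = baseline + interview_lift
print(f"Baseline: {baseline:.0%}")              # 74%
print(f"With interview: {with_interview:.0%}")  # 93%
```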
