Prosecution Insights
Last updated: April 19, 2026
Application No. 18/401,084

SYSTEM AND METHOD FOR CALIBRATING CAMERA POSITION IN A COMPUTER VISION-BASED SELF-SERVICE CHECKOUT TERMINAL

Non-Final OA §103
Filed
Dec 29, 2023
Examiner
NAVAS JR, EDEMIO
Art Unit
2483
Tech Center
2400 — Computer Networks
Assignee
NCR Voyix Corporation
OA Round
3 (Non-Final)
71%
Grant Probability
Favorable
3-4
OA Rounds
2y 9m
To Grant
96%
With Interview

Examiner Intelligence

Grants 71% — above average
71%
Career Allow Rate
384 granted / 540 resolved
+13.1% vs TC avg
Strong +25% interview lift
+24.7%
Interview Lift
resolved cases with vs. without an interview
Typical timeline
2y 9m
Avg Prosecution
31 currently pending
Career history
571
Total Applications
across all art units

Statute-Specific Performance

§101
3.2%
-36.8% vs TC avg
§103
60.1%
+20.1% vs TC avg
§102
23.5%
-16.5% vs TC avg
§112
8.2%
-31.8% vs TC avg
Black line = Tech Center average estimate • Based on career data from 540 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/17/2025 has been entered.

Response to Arguments

Applicant's arguments with respect to claims 1-6 have been considered but are moot in view of the new ground(s) of rejection.

Regarding claim 1, applicant argues that Doganis fails to disclose or suggest a camera "having a predefined expected orientation with respect to an actual scan zone for the self-service terminal" because no physical scan zone exists in Doganis; instead, the calibration target taught by Doganis is intentionally movable and is not used to confirm or adjust a camera's physical installation orientation relative to a fixed terminal surface. However, reading the claims under the broadest reasonable interpretation, the examiner respectfully disagrees. Initially, the examiner notes that neither the claim language nor applicant's specification explains what exactly constitutes an "actual" scan zone, as opposed to simply a scan zone. Additionally, as described in ¶0061, Doganis explains that the user may move the camera around the pattern, thus indeed aiming toward a scan zone in the form of the area in which the pattern is placed.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Doganis (U.S. PG Publication No. 2017/0372491) in view of Itaenen et al. ("Itae") (U.S. PG Publication No. 2023/0316368).

In regards to claim 1, Doganis teaches a system for camera position calibration in a terminal, comprising: a computing device having a processor (See ¶0067 in view of FIG. 7) and a non-transitory computer-readable storage medium (See ¶0067-0068); a camera (See ¶0002 and FIG. 7) coupled to the computing device having a predefined expected orientation with respect to an actual scan zone (See FIG. 2) for the terminal such that a target zone (See FIG. 3 in view of ¶0059-0060) representing the actual scan zone or an outline thereof appears in a predefined position (See FIG. 3 in view of ¶0059-0060 with regards to target area 102) within a field of view of the camera (See FIG. 3), the camera providing a video output signal to the computing device (See ¶0071 with regards to the video stream); a user interface coupled to the computing device having a display (See ¶0071 in view of FIG. 7 with regards to the interface and the display); and wherein the non-transitory computer-readable storage medium includes executable instructions that, when executed by the processor, cause the processor to: combine the video output signal from the camera with an overlay image file (See ¶0059 in view of FIG. 3, wherein "the screen also displays, superimposed to the video stream from the camera, a geometric shape 102 generated by the computer and representing a target area for the calibration pattern"); and provide the video output signal from the camera combined with the overlay image file to the display to show an actual orientation of the camera with respect to the target zone overlaid with the overlay image file (See FIG. 3 and 4 in view of ¶0059-0060) to allow a user to adjust a position of the camera so that the actual orientation of the camera with respect to the target zone matches the predefined expected orientation (See ¶0061, wherein the system moves "the camera around a static pattern instead of moving the pattern").

Doganis, however, fails to teach that the system is in the form of a self-service terminal. Doganis additionally fails to teach the overlay image file having a nontransparent section and a transparent section corresponding to the predefined position of the target zone within the field of view of the camera, wherein the overlay image file comprises a non-transparent mask region covering substantially the entire field of view of the camera except for a transparent window, the transparent window corresponding to the predefined position of the target zone within the field of view of the camera.

In a similar endeavor, Itae teaches that the system may be in the form of a self-service checkout system or kiosk, as described in ¶0097, which is meant to image objects of interest in physical locations for reaching customers and facilitating merchant services. Itae additionally teaches the overlay image file having a nontransparent section and a transparent section corresponding to the predefined position of the target zone within the field of view of the camera (See ¶0016 and 0050, wherein portions not belonging to the target object are cropped or masked, thus applying overlay image data in the form of a mask, while the object region remains transparent), wherein the overlay image file comprises a non-transparent mask region covering substantially the entire field of view of the camera except for a transparent window, the transparent window corresponding to the predefined position of the target zone within the field of view of the camera (See ¶0016 in view of FIG. 2A-2D).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teaching of Itae into Doganis because it allows for a checkout system, as described in ¶0097, that reaches customers and facilitates merchant services through the real pathways available for interaction with them, and because it allows the target object to remain fully transparent for proper imaging while all other portions are masked in a non-transparent region, thus fully concentrating on the desired target object image, as described in ¶0119; it is also understood by one of ordinary skill in the art that such systems would require calibration, as performed by Doganis's teachings. Although not used in the rejection, the examiner points to ¶0119 in view of FIG. 10 of Chen, wherein an image may be captured from which a mask image may then be created, with a portion corresponding to the target object marked as fully transparent while the non-transparent portion covers the remainder, such that the overall image shows the target object; it is understood that a mask may be interpreted as an overlay.
In regards to claim 2, Doganis fails to explicitly teach the system of claim 1, wherein the camera is coupled to the computing device via a network connection and the video output signal is transmitted in a digital format. In a similar endeavor, Itae teaches wherein the camera is coupled to the computing device via a network connection (See FIG. 5 and 7 in view of ¶0024 and 0037) and the video output signal is transmitted in a digital format (See ¶0065, 0091 and 0133 in view of FIG. 1). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teaching of Itae into Doganis for the same reasons set forth above with respect to claim 1.

In regards to claim 6, Doganis teaches a method for camera position calibration in a self-service terminal, the self-service terminal including a camera (See ¶0002 and FIG. 7) having a predefined expected orientation with respect to an actual scan zone (See FIG. 2) for the self-service terminal such that a target zone (See FIG. 3 in view of ¶0059-0060) representing the actual scan zone or an outline thereof appears in a predefined position (See FIG. 3 in view of ¶0059-0060 with regards to target area 102) within a field of view of the camera (See FIG. 3), comprising: combining a video output signal from the camera with an overlay image file (See ¶0059 in view of FIG. 3, wherein "the screen also displays, superimposed to the video stream from the camera, a geometric shape 102 generated by the computer and representing a target area for the calibration pattern"), and providing the video output signal from the camera combined with the overlay image file to a display to show an actual orientation of the camera with respect to the target zone overlaid with the overlay image file (See FIG. 3 and 4 in view of ¶0059-0060) to allow a user to adjust a position of the camera so that the actual orientation of the camera with respect to the target zone matches the predefined expected orientation (See ¶0061, wherein the system moves "the camera around a static pattern instead of moving the pattern").

Doganis, however, fails to teach that the method is performed in a self-service terminal. Doganis additionally fails to teach the overlay image file having a nontransparent section and a transparent section corresponding to the predefined position of the target zone within the field of view of the camera; and wherein the overlay image file comprises a non-transparent mask region covering substantially the entire field of view of the camera except for a transparent window, the transparent window corresponding to the predefined position of the target zone within the field of view of the camera.

In a similar endeavor, Itae teaches that the system may be in the form of a self-service checkout system or kiosk, as described in ¶0097, which is meant to image objects of interest in physical locations for reaching customers and facilitating merchant services. Itae additionally teaches the overlay image file having a nontransparent section and a transparent section corresponding to the predefined position of the target zone within the field of view of the camera (See ¶0016 and 0050, wherein portions not belonging to the target object are cropped or masked, thus applying overlay image data in the form of a mask, while the object region remains transparent); and wherein the overlay image file comprises a non-transparent mask region covering substantially the entire field of view of the camera except for a transparent window, the transparent window corresponding to the predefined position of the target zone within the field of view of the camera (See ¶0016 in view of FIG. 2A-2D). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teaching of Itae into Doganis for the same reasons set forth above with respect to claim 1.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Doganis (U.S. PG Publication No. 2017/0372491) in view of Itaenen et al. ("Itae") (U.S. PG Publication No. 2023/0316368) and Bedau (U.S. PG Publication No. 2023/0258593).

In regards to claim 3, Doganis fails to teach the system of claim 1, wherein the camera transmits a composite video signal to the computing device and wherein the computing device digitizes the composite video signal and then combines the digitized composite video signal with the overlay. In a similar endeavor, Bedau teaches wherein the camera transmits a composite video signal to the computing device and wherein the computing device digitizes the composite video signal and then combines the digitized composite video signal with the overlay (See ¶0148 in view of FIG. 13). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teaching of Bedau into Doganis because it allows not only for the digitization of video data, as described in ¶0148, but also for the additional functionalities provided by the control logic and interface seen in at least FIG. 13.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Doganis (U.S. PG Publication No. 2017/0372491) in view of Mullins (U.S. PG Publication No. 2020/0327315) and Itaenen et al. ("Itae") (U.S. PG Publication No. 2023/0316368).

In regards to claim 4, Doganis teaches a system for camera position calibration in a self-service terminal, comprising: a computing device having a processor and a non-transitory computer-readable storage medium (See ¶0067-0070 in view of FIG. 7); a camera coupled to the computing device (See ¶0002 and FIG. 7), the camera having a predefined expected orientation with respect to a scan zone (See FIG. 2) for the terminal such that a target zone (See FIG. 3 in view of ¶0059-0060) representing the scan zone or an outline thereof appears in a predefined position (See FIG. 3 in view of ¶0059-0060 with regards to target area 102) within a field of view of the camera (See FIG. 3), the camera providing a respective video output signal to the computing device (See ¶0071 with regards to the video stream); a user interface coupled to the computing device having a display (See ¶0071 in view of FIG. 7 with regards to the interface and the display); and wherein the non-transitory computer-readable storage medium includes executable instructions that, when executed by the processor, cause the processor to: combine the video output signal from the selected camera with an overlay image file (See ¶0059 in view of FIG. 3, wherein "the screen also displays, superimposed to the video stream from the camera, a geometric shape 102 generated by the computer and representing a target area for the calibration pattern"), and provide the video output signal from the selected camera combined with the overlay image file to the display to show an actual orientation of the selected camera with respect to the target zone overlaid with the overlay image file (See FIG. 3 and 4 in view of ¶0059-0060) to allow a user to adjust a position of the selected camera so that the actual orientation of the selected camera with respect to the target zone matches the predefined expected orientation (See ¶0061, wherein the system moves "the camera around a static pattern instead of moving the pattern").

Doganis, however, fails to explicitly teach a plurality of cameras; allowing a user to select one of the plurality of cameras for calibration using the user interface; the overlay image file having a nontransparent section and a transparent section corresponding to the predefined position of the target zone within the field of view of the selected camera; that the system is in the form of a self-service terminal; and wherein the overlay image file comprises a non-transparent mask region covering substantially the entire field of view of the selected camera except for a transparent window, the transparent window corresponding to the predefined position of the target zone within the field of view of the camera. Rather, Doganis only teaches the use of a single camera and, although a user and a user interface are taught, fails to specify that the user may use the interface to calibrate the camera(s).

In a similar endeavor, Mullins teaches a plurality of cameras (See ¶0027, 0080 and FIG. 11) and allowing a user to select one of the plurality of cameras for calibration using the user interface (See ¶0150, 0155 and FIG. 13). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teaching of Mullins into Doganis because it allows each zone or area to be calibrated differently based on the type of capturing needed for that zone or area, as described in at least ¶0150, thus enabling a more robust and adaptable system.

In a similar endeavor, Itae teaches that the system may be in the form of a self-service checkout system or kiosk, as described in ¶0097, which is meant to image objects of interest in physical locations for reaching customers and facilitating merchant services.
Additionally, Itae teaches the overlay image file having a nontransparent section and a transparent section corresponding to the predefined position of the target zone within the field of view of the selected camera (See ¶0016 and 0050, wherein portions not belonging to the target object are cropped or masked, thus applying overlay image data in the form of a mask, while the object region remains transparent); and wherein the overlay image file comprises a non-transparent mask region covering substantially the entire field of view of the selected camera except for a transparent window, the transparent window corresponding to the predefined position of the target zone within the field of view of the camera (See ¶0016 in view of FIG. 2A-2D). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teaching of Itae into Doganis for the same reasons set forth above with respect to claim 1.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Doganis (U.S. PG Publication No. 2017/0372491) in view of Mullins (U.S. PG Publication No. 2020/0327315) and Itaenen et al. ("Itae") (U.S. PG Publication No. 2023/0316368), and further in view of Navon et al. ("Navon") (U.S. PG Publication No. 2023/0117218).

In regards to claim 5, Doganis teaches the system of claim 4, wherein the video output signal from each of the plurality of cameras is transmitted in a digital format (See ¶0059). Doganis, however, fails to teach wherein each of the plurality of cameras is coupled to the computing device via a network switch. In a similar endeavor, Navon teaches wherein each of the plurality of cameras is coupled to the computing device via a network switch (See ¶0003 in view of FIG. 1). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teaching of Navon into Doganis because it allows for the proper direction of packets to their destinations, even along complex networks, as seen in FIG. 1 and described in ¶0003, for proper communication between a plurality of access network devices.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDEMIO NAVAS JR, whose telephone number is (571) 270-1067. The examiner can normally be reached M-F, approximately 9 AM-6 PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Joseph Ustaris, can be reached at (571) 272-7383. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

EDEMIO NAVAS JR
Primary Examiner, Art Unit 2483
/EDEMIO NAVAS JR/
Primary Examiner, Art Unit 2483
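The overlay scheme at the center of this rejection (a non-transparent mask covering substantially the entire field of view except for a transparent window at the target zone's predefined position) can be sketched in a few lines. This is a minimal illustration only; the function names, frame dimensions, window coordinates, and mask color are all invented for the example and are not drawn from the application or any cited reference.

```python
import numpy as np

def make_overlay_alpha(height, width, window):
    """Per-pixel opacity map: 1.0 = opaque mask, 0.0 = transparent window."""
    y0, y1, x0, x1 = window
    alpha = np.ones((height, width), dtype=np.float32)  # opaque everywhere
    alpha[y0:y1, x0:x1] = 0.0  # transparent window at the target-zone position
    return alpha

def composite(frame, alpha, mask_color=(64, 64, 64)):
    """Blend a camera frame with the mask; only the window shows live video."""
    color = np.array(mask_color, dtype=np.float32)
    a = alpha[..., None]  # broadcast opacity across the color channels
    return ((1.0 - a) * frame.astype(np.float32) + a * color).astype(np.uint8)

# Stand-in for one frame of the camera's video output signal.
frame = np.full((480, 640, 3), 200, dtype=np.uint8)
alpha = make_overlay_alpha(480, 640, window=(120, 360, 160, 480))
shown = composite(frame, alpha)  # what the installer would see on the display
```

An installer aligning the camera would adjust its position until the calibration target or scan zone fills the transparent window, which is the behavior the claims attribute to the combined video-plus-overlay display.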

Prosecution Timeline

Dec 29, 2023
Application Filed
Apr 16, 2025
Non-Final Rejection — §103
Jul 17, 2025
Response Filed
Sep 10, 2025
Final Rejection — §103
Dec 03, 2025
Response after Non-Final Action
Dec 17, 2025
Request for Continued Examination
Dec 22, 2025
Response after Non-Final Action
Feb 11, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598398
Terminal Detection Platform
2y 5m to grant Granted Apr 07, 2026
Patent 12598283
METHOD AND DISPLAY APPARATUS FOR CORRECTING DISTORTION CAUSED BY LENTICULAR LENS
2y 5m to grant Granted Apr 07, 2026
Patent 12593141
INFORMATION MANAGEMENT DEVICE, INFORMATION MANAGEMENT METHOD, AND STORAGE MEDIUM FOR MANAGING INFORMATION PROVIDED TO A MOBILE OBJECT AND DEVICE USED BY A USER IN LOCATION DIFFERENT FROM THE MOBILE OBJECT
2y 5m to grant Granted Mar 31, 2026
Patent 12587686
SIGNALING FOR GENERAL CONSTRAINT INFORMATION IN VIDEO CODING
2y 5m to grant Granted Mar 24, 2026
Patent 12587643
IMAGE ENCODING/DECODING METHOD AND DEVICE, AND RECORDING MEDIUM IN WHICH BITSTREAM IS STORED FOR BLOCK DIVISION AT PICTURE BOUNDARY
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

3-4
Expected OA Rounds
71%
Grant Probability
96%
With Interview (+24.7%)
2y 9m
Median Time to Grant
High
PTA Risk
Based on 540 resolved cases by this examiner. Grant probability derived from career allow rate.
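The headline figures follow from the examiner data shown earlier. A minimal sketch of the apparent arithmetic, assuming the dashboard simply adds the interview lift to the career allow rate (an assumption about its model, not a documented formula):

```python
granted, resolved = 384, 540        # examiner's career data shown above
allow_rate = granted / resolved     # career allow rate -> the 71% figure
interview_lift = 0.247              # reported lift for cases with an interview
with_interview = min(allow_rate + interview_lift, 1.0)  # -> the 96% figure
print(round(allow_rate * 100), round(with_interview * 100))  # prints: 71 96
```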
