Prosecution Insights
Last updated: April 19, 2026
Application No. 18/846,716

CALIBRATION METHOD FOR AN ELECTRONIC DISPLAY SCREEN FOR TOUCHLESS GESTURE CONTROL

Final Rejection — §103

Filed: Sep 13, 2024
Examiner: GUPTA, PARUL H
Art Unit: 2627
Tech Center: 2600 — Communications
Assignee: Ameria AG
OA Round: 2 (Final)

Grant Probability: 61% (Moderate)
OA Rounds: 3-4
To Grant: 2y 11m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 61% (grants 61% of resolved cases; 375 granted / 617 resolved; -1.2% vs TC avg)
Interview Lift: +33.0% (strong; resolved cases with interview)
Avg Prosecution: 2y 11m (typical timeline); 14 currently pending
Total Applications: 631 (career history, across all art units)

Statute-Specific Performance

§101: 1.9% (-38.1% vs TC avg)
§103: 71.3% (+31.3% vs TC avg)
§102: 15.2% (-24.8% vs TC avg)
§112: 6.4% (-33.6% vs TC avg)

Based on career data from 617 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-15 are rejected under 35 U.S.C. 103 as being unpatentable over Shimaoka et al., US Patent Publication 2019/0114801, in view of Yoshizawa et al., US Patent Publication 2020/0389691.

Regarding independent claim 1, Shimaoka et al.
teaches a computer-implemented method of calibrating an electronic display screen (paragraph 0010 recites “An interactive interface system calibration method of…performing calibration between a display position on the display screen and a detection position by the sensor device, based on a detection result given by the sensor device.”) for control (paragraph 0037 explains that the objects sensed include hands of the user, rendering obvious gesture detection and control) using a calibration device having a calibration pattern (paragraph 0046 explains the calibration pattern used by the device), wherein the method comprises:

detecting, using at least one depth camera, the calibration pattern of the calibration device while the calibration device is placed on the electronic display screen in a calibration position (paragraph 0025 explains the sensing of the markings of the pattern and paragraphs 0057-0058 disclose that “the position obtainment unit 32 of the controller 3 may obtain the two-dimensional image and the distance image which represent the work area 110, from the sensor device 5. The position obtainment unit 32 detects the marks 63 of the plurality of three-dimensional markers 60 from the two-dimensional image by the template matching” and where paragraphs 0034-0037 explain that “the sensor device 5 includes the infrared camera 52 and the RGB camera 53 which serve as an image sensor” and that the “infrared irradiator 51 and the infrared camera 52 form a distance image sensor” that render obvious a depth camera as given further in paragraphs 0082-0084);

determining borders of the electronic display screen based at least on the detected calibration pattern (paragraph 0058 explains how the calibration pattern is used to determine the details of the screen layout), a reference pattern which is usable for determining an orientation of the detected calibration pattern (paragraph 0077 explains the use of the frame line for positioning the plate member 70 of figure 10 that would guide the orientation), and screen dimension information with respect to the electronic display screen (paragraph 0051 explains how the size or dimensions are determined from the parameters), wherein the reference pattern is a digital representation or replica of the calibration pattern (paragraphs 0053 and 0077 explain that the dots D1 to D11 of figures 5 and 10 are used as a digital representation of the positions of the markers as part of the frame line used for positioning); and

defining a control input area for the electronic display screen being observable by the at least one depth camera (paragraphs 0035 and 0037 explain that the sensor device 5 detects the positions of the hands of the worker present in the work area 110 where the work area 110 is given to be “a work surface 101 which is an upper surface of the cooking counter 100, and a space above it” as given in paragraph 0031. Thus, the work area 110 must be determined before it can be used for detecting objects interacting with the display screen).

Shimaoka et al.
does not specify that the input is touchless gesture control input. Yoshizawa et al. teaches input as being touchless gesture control input (paragraph 0075 describes the use of gesture control). It would have been obvious to one of ordinary skill in the art before the effective filing date to use touchless gesture control input as taught by Yoshizawa et al. in the system of Shimaoka et al. The rationale to combine would be to have a way to control that does not require the user to learn complicated input methods (paragraph 0185 of Yoshizawa et al.).

Regarding claim 2, Shimaoka et al. teaches the method of claim 1, further comprising: displaying a calibration guiding mark on the electronic display screen for guiding a user to place the calibration device in the calibration position (paragraph 0077 describes the dots indicating the positions of the markers on the screen, as explained further in paragraph 0078).

Regarding claim 3, Shimaoka et al. teaches the method of claim 2, wherein the calibration guiding mark comprises at least two, preferably three, markings being displayed on the electronic display screen (paragraphs 0077-0078 explain the dots on the screen used to indicate the positions of the markers).

Regarding claim 4, Shimaoka et al. teaches the method according to claim 2, wherein the calibration guiding mark comprises at least one orientation reference marking for unequivocally guiding a user to place the calibration device on the electronic display screen in a specific rotational orientation with respect to a plane-normal of the electronic display screen (paragraph 0077 explains the use of the frame line for positioning the plate member 70 of figure 10 that would guide the specific orientation).

Regarding claim 5, Shimaoka et al.
teaches the method of claim 1, wherein at least the steps of detecting the calibration pattern, determining borders of the electronic display screen and defining a control input area are triggered upon receiving a user input or upon automatically detecting that the calibration device is in the calibration position (paragraph 0050 explains how the calibration operation is based on timing or a manual operation input from the user and paragraph 0051 goes on to explain that input is used to determine the size and screen area).

Shimaoka et al. does not specify that the input is touchless gesture control input. Yoshizawa et al. teaches input as being touchless gesture control input (paragraph 0075 describes the use of gesture control). It would have been obvious to one of ordinary skill in the art before the effective filing date to use touchless gesture control input as taught by Yoshizawa et al. in the system of Shimaoka et al. The rationale to combine would be to have a way to control that does not require the user to learn complicated input methods (paragraph 0185 of Yoshizawa et al.).

Regarding claim 6, Shimaoka et al. teaches the method of claim 1, wherein determining borders of the electronic display screen comprises at least one base transformation operation of a coordinate system (paragraph 0051 explains how the projection is made based on a displacement between the center of the sensed position of the device and the screen where a displacement is the base transformation of the coordinate system).

Regarding claim 7, Shimaoka et al.
teaches the method of claim 1, wherein determining borders of the electronic display screen comprises: determining, using the at least one depth camera, a center of the calibration pattern in 3D and defining a coordinate system, wherein the center of the calibration pattern is the origin of the coordinate system (paragraph 0051 explains the sensing of the center that defines a coordinate system where paragraphs 0034-0037 explain that “the sensor device 5 includes the infrared camera 52 and the RGB camera 53 which serve as an image sensor” and that the “infrared irradiator 51 and the infrared camera 52 form a distance image sensor” that renders obvious a depth camera as given further in paragraphs 0082-0084); and shifting the origin of the coordinate system orthogonal to the electronic display screen surface so that the origin of the coordinate system is in the plane of the electronic display screen surface (paragraphs 0051-0053 explain how the displacement of the center position of the screen projected and the sensed center is used to correct or adjust the generated image).

Regarding claim 8, Shimaoka et al. teaches the method of claim 1, wherein the screen dimension information with respect to the electronic display screen is received via user input and/or is determined automatically based on a resolution of the electronic display screen and based on a pixel density of the electronic display screen (paragraphs 0050-0051 explain how size information is received from the input of the input screen that is used by the user).

Regarding claim 9, Shimaoka et al.
teaches the method of claim 1, wherein detecting the calibration pattern of the calibration device is performed using two depth cameras (paragraph 0025 explains the sensing of the markings of the pattern where paragraphs 0034-0037 explain that “the sensor device 5 includes the infrared camera 52 and the RGB camera 53 which serve as an image sensor” and that the “infrared irradiator 51 and the infrared camera 52 form a distance image sensor” that are multiple cameras that render obvious depth cameras as given further in paragraphs 0082-0084); wherein the two depth cameras are arranged at borders of the electronic display screen (figure 2 shows the arrangements of the cameras to be on opposite sides of the projected display. Any further movement of the cameras would be an obvious matter of rearrangement of parts. In re Japikse, 181 F.2d 1019, 86 USPQ 70 (CCPA 1950)).

Regarding claim 10, Shimaoka et al. teaches the method of claim 1, wherein defining a control input area comprises a definition of a virtual screen layer being essentially parallel to the electronic display screen (paragraphs 0031-0032 explain how the projection device projects an image that is used as the control input area at a virtual layer that may be parallel and at a distance from the display screen by projecting it using mirrors at different surfaces).

Shimaoka et al. does not specify that the input is touchless gesture control input. Yoshizawa et al. teaches input as being touchless gesture control input (paragraph 0075 describes the use of gesture control). It would have been obvious to one of ordinary skill in the art before the effective filing date to use touchless gesture control input as taught by Yoshizawa et al. in the system of Shimaoka et al. The rationale to combine would be to have a way to control that does not require the user to learn complicated input methods (paragraph 0185 of Yoshizawa et al.).

Regarding claim 11, Shimaoka et al.
teaches the method of claim 1, wherein the calibration pattern is a fiducial marker and/or a three-dimensional pattern (paragraph 0046 explains that the marker of the calibration pattern can be a three-dimensional pattern in the shape of a prism).

Regarding claim 12, Shimaoka et al. teaches the method of claim 1, further comprising outputting a signal upon starting, successfully ending, aborting, and/or failing the calibration of the electronic display screen (paragraph 0044 explains how starting and ending and all functions of calibration are performed where a signal must inherently be sent to cause these actions to occur).

Regarding claim 13, Shimaoka et al. teaches a calibration device for use in the method according to claim 1 for calibrating an electronic display screen (as given in paragraph 0046), the calibration device comprising: a main body defining a main axis (M) of the calibration device (paragraph 0046 describes the “display part 62 with a rectangular plate shape extending upward from one end in a length of the pedestal 61” as depicted in figure 6), wherein the main axis (M) is a main central longitudinal axis of the main body (figure 6 shows that the portion containing mark 63 is a main central longitudinal portion with an axis M); a footing at a distal end of the main body, the footing having at least one footing surface being placeable on the electronic display screen such that the footing surface is in contact with the electronic display screen (paragraph 0046 describes the “pedestal 61 with a rectangular plate shape to be placed on the work surface 101 of the cooking counter 100” as depicted in figures 6-8 and 10), wherein the footing surface is designed as a flat plate part having a plate area that prevents accidental tilting of the calibration device on the electronic display screen while a user places the calibration device on the electronic display screen (figure 6 shows the footing portion containing pedestal 61 as a flat plate area that does not tilt since it is a flat surface that is stable when placed); and a calibration pattern comprising a machine-readable pattern (paragraph 0046 explains “There is an inverted triangle mark 63 provided to a surface of the display part 62 by appropriate methods such as printing, painting, or using tape. The mark 63 is an isosceles triangle with one side defined as an upper side of the display part 62 and a vertex defined as a midpoint of a lower side of the display part 62. The mark 63 provided to the display part 62 has a lower end which indicates a contact point with the display screen (the work surface 101). Note that, shapes of the three-dimensional marker 60 and the mark 63 may be modified appropriately. For example, the three-dimensional marker 60 may have a shape of a pillar such as a square prism and a triangular prism, or a shape of a pyramid shape such as a three-sided pyramid and a four-sided pyramid” as depicted in figure 6) being detectable by at least one depth camera (paragraph 0025 explains the sensing of the markings of the pattern and paragraphs 0057-0058 disclose that the “position obtainment unit 32 of the controller 3 may obtain the two-dimensional image and the distance image which represent the work area 110, from the sensor device 5. The position obtainment unit 32 detects the marks 63 of the plurality of three-dimensional markers 60 from the two-dimensional image by the template matching” and where paragraphs 0034-0037 explain that “the sensor device 5 includes the infrared camera 52 and the RGB camera 53 which serve as an image sensor” and that the “infrared irradiator 51 and the infrared camera 52 form a distance image sensor” that are multiple cameras that render obvious depth cameras as given further in paragraphs 0082-0084), wherein the at least one depth camera is arranged at or near the electronic display screen and is configured to observe a spatial area in front of the electronic display screen in order to detect a gesture input of a user (paragraph 0034 explains that “the sensor device 5 is placed in one direction when viewed from the work surface 101 serving as the display screen, and is placed close to one side of the display screen (the work surface 101) (in the present embodiment, a front side). In the present embodiment, the sensor device 5 is not placed to entirely surround the display screen, but a position of an object overlapping the display screen is detected by use of the sensor device 5 placed in one direction when viewed from the display screen.” and paragraph 0038 continues that “the sensor device 5 can detect a position in height (upward and downward direction) of an object present in the work area 110. Accordingly, the controller 3 can determine whether an object is in contact with the display screen (the work surface 101), based on the two-dimensional image and the distance image input from the sensor device 5” where paragraph 0037 explains that the object may be hands of the user).

Regarding claim 14, Shimaoka et al. teaches a data processing apparatus comprising means for carrying out the method of claim 1 (described in paragraphs 0023 and 0030).

Regarding claim 15, Shimaoka et al.
teaches a non-transitory storage medium having stored thereon computer program instructions which, when the computer program instructions are executed by a computer, cause the computer to carry out the method of claim 1 (as given in paragraphs 0041, 0072, and 0074).

Response to Arguments

Applicant's arguments filed 3/7/26 have been fully considered but they are not persuasive.

Applicant contends that Shimaoka does not disclose a reference pattern that is a digital representation or replica of the calibration pattern. The examiner disagrees. Paragraphs 0053 and 0077 explain that the dots D1 to D11 of figures 5 and 10 are used as a digital representation of the positions of the markers as part of the frame line used for positioning.

Applicant contends that Shimaoka does not disclose the same main body and footing structure as claimed. The examiner disagrees. Figure 6 shows a vertical main body with a horizontal footing. The two pieces combine to form the marker. The marker has a main part with mark 63 and a footing with pedestal 61. They combine to form a structure that is stable and does not tilt.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PARUL H GUPTA whose telephone number is (571) 272-5260. The examiner can normally be reached Monday through Friday, from 10 AM to 7 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ke Xiao, can be reached at 571-272-7776. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PARUL H GUPTA/
Primary Examiner, Art Unit 2627

Prosecution Timeline

Sep 13, 2024 — Application Filed
Sep 21, 2025 — Non-Final Rejection (§103)
Mar 07, 2026 — Response Filed
Mar 24, 2026 — Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12593588 — DISPLAY SUBSTRATE (2y 5m to grant; granted Mar 31, 2026)
Patent 12585342 — WRIST-WORN DEVICE CONTROL METHOD, RELATED SYSTEM, AND STORAGE MEDIUM (2y 5m to grant; granted Mar 24, 2026)
Patent 12578913 — DISPLAY METHOD, ELECTRONIC DEVICE, AND SYSTEM (2y 5m to grant; granted Mar 17, 2026)
Patent 12579953 — DISPLAY APPARATUS, CONTROL MODULE THEREOF AND DRIVE METHOD THEREFOR (2y 5m to grant; granted Mar 17, 2026)
Patent 12579941 — PIXEL DRIVING CIRCUIT AND DISPLAY PANEL (2y 5m to grant; granted Mar 17, 2026)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 61%
With Interview: 94% (+33.0%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate

Based on 617 resolved cases by this examiner. Grant probability derived from career allow rate.
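The note above says the grant probability is derived from the career allow rate, so the headline figures can be sanity-checked from the stats on this page. The rounding and the additive interview-lift model below are assumptions inferred from the displayed numbers, not the vendor's documented methodology:

```python
# Sanity-check of the dashboard's headline projections from the career
# statistics shown above. Formulas are assumptions, not the vendor's model.

granted = 375          # from "375 granted / 617 resolved"
resolved = 617
interview_lift = 33.0  # percentage-point lift with interview

allow_rate = 100 * granted / resolved          # career allow rate in percent
grant_probability = round(allow_rate)          # displayed as 61%
# Assumed additive model, capped at 100%:
with_interview = min(grant_probability + interview_lift, 100)

print(f"Career allow rate:  {allow_rate:.1f}%")      # 60.8%
print(f"Grant probability:  {grant_probability}%")   # 61%
print(f"With interview:     {with_interview:.0f}%")  # 94%
```

The arithmetic reproduces the page's figures: 375/617 rounds to 61%, and 61% plus the 33-point lift gives the 94% shown for interviews.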
