Prosecution Insights
Last updated: April 19, 2026
Application No. 18/098,953

METHOD FOR NON-CONTACT TRIGGERING OF BUTTONS

Non-Final OA (§103)
Filed: Jan 19, 2023
Examiner: DULANEY, KATHLEEN YUAN
Art Unit: 2666
Tech Center: 2600 — Communications
Assignee: M'Ai Touch Technology Co. Ltd.
OA Round: 3 (Non-Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 77%, above average (504 granted / 653 resolved; +15.2% vs TC avg)
Interview Lift: +24.0% for resolved cases with interview
Typical Timeline: 3y 2m avg prosecution; 32 applications currently pending
Career History: 685 total applications across all art units

Statute-Specific Performance

§101: 21.2% (-18.8% vs TC avg)
§103: 33.1% (-6.9% vs TC avg)
§102: 16.3% (-23.7% vs TC avg)
§112: 26.4% (-13.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 653 resolved cases.
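The rates above are plain ratios over the examiner's resolved cases. A minimal sketch of the arithmetic follows; the helper names are illustrative, and the roughly 62% Tech Center average is back-calculated from the displayed +15.2% delta rather than taken from any official source.

```python
# Minimal sketch of the examiner-metric arithmetic shown above.
# Helper names are illustrative assumptions; the ~62% TC average is
# back-calculated from the displayed +15.2% delta, not an official figure.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def delta_vs_avg(rate: float, tc_avg: float) -> float:
    """Signed difference against the Tech Center average."""
    return rate - tc_avg

rate = allow_rate(504, 653)       # 504 granted / 653 resolved -> ~77.2%
delta = delta_vs_avg(rate, 62.0)  # assumed TC 2600 average -> ~+15.2%
print(f"{rate:.1f}% career allow rate, {delta:+.1f}% vs TC avg")
```

The dashboard rounds the 77.2% ratio to the displayed 77%.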

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

The response received on 2/5/2026 has been placed in the file and was considered by the examiner. An action on the merits follows.

Response to Amendment

The amendments filed on February 5, 2026 have been fully considered. A response to these amendments is provided below.

Summary of Amendments/Arguments and Examiner's Response: The applicant amends the independent claims and summarizes the amendments on pages 11-12 of the remarks. On page 12, the applicant continues to argue that Preston does not receive "time series data," only reflected light. On page 13, the applicant admits that Nikovski teaches an RGBD camera but argues it does not disclose sensing time series data. The examiner disagrees. In Preston, the reflected light is not captured at only one instant of time but over a period of time, because the data must be taken over a period of time in order for the system to function (pages 2-3, paragraphs 20-22). However, even if, arguably, Preston does not teach time series data, Nikovski also discloses time series data (page 2, paragraph 17; page 3, paragraph 20). The applicant argues on pages 13-14 that the same arguments apply to the other claims. As explained above, and in the rejection below, the prior art teaches the claimed limitations.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6, 11-18 and 23-24 are rejected under 35 U.S.C. 103(a) as being unpatentable over U.S. Patent Application Publication No. 20150232300 (Preston) in view of U.S. Patent Application Publication No. 20220324676 (Nikovski et al.).

Regarding claim 1, Preston discloses a non-contact button triggering method (pages 2-3, paragraphs 20-22), comprising: sensing time series data, i.e. the sensed information of page 2, paragraph 20 as objects pass through the optical curtain, with a sensor (fig. 2, item 102) arranged on an operation panel, i.e. the panel to which fig. 2, items 32 and 102 are attached, wherein the time series data includes at least one optical data, i.e. the optically sensed data from the 3D zone described in page 2, paragraph 18, and a range of optical data of the at least one optical data covers a plurality of buttons arranged on the operation panel (page 2, paragraph 18); determining whether the optical data contains a target object by a system module (page 2, paragraph 20); determining a tip coordinate of a tip of the target object by the system module when the optical data contains the target object, by detecting the precise location of the disturbance, and thus the tip of the object that causes the disturbance (page 2, paragraph 20), comprising determining the point with the closest distance to the operation panel to be a tip (page 2, paragraph 20, fig.
6A); determining button information corresponding to the tip coordinate among a plurality of button information by the system module (page 2, paragraph 21), and transmitting a control signal at least according to the button information, wherein the plurality of button information is associated with the plurality of buttons (page 3, paragraph 21); and receiving the control signal by a controller, and performing a control operation according to the control signal, i.e. the driving of the elevator (page 3, paragraph 21).

Preston does not expressly disclose that the optical data includes an image, that determining the tip coordinate of a tip comprises identifying a plurality of protruding points of the target object by the system module and determining one of the plurality of protruding points with the closest distance to the surface to be the tip, or that the sensor is a 3D sensor for capturing a plurality of images at a certain time interval as the time series data.

Nikovski et al. discloses that the optical data for touchless elevator button control includes an image, since an RGBD camera that creates images is used (page 5, paragraph 48). Nikovski et al. further discloses that determining the tip coordinate of the tip of the target object comprises: identifying a plurality of protruding points of the target object by the system module, i.e. the points of the fingertip, which are protruding points since the finger protrudes from the hand (page 6, paragraph 68, fig. 4, item 406), and determining one of the plurality of protruding points with the closest distance to the operation panel to be the tip (fig. 4, item 408), wherein the sensor is a 3D sensor, an RGBD camera (page 5, paragraph 48), for capturing a plurality of images at a certain time interval as the time series data (page 2, paragraph 17; page 3, paragraph 20; page 7, paragraph 79).

Preston and Nikovski et al. are combinable because they are from the same field of endeavor, i.e. position detection for touchless interfaces.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to capture optical images. The suggestion/motivation for doing so would have been to provide a more convenient, robust method by capturing data in a regular manner. Therefore, it would have been obvious to combine the method of Preston with the image capture of Nikovski et al. to obtain the invention as specified in claim 1.

Claim 13 is rejected for the same reasons as claim 1. Thus, arguments analogous to those presented above for claim 1 are equally applicable to claim 13. Claim 13 is distinguished from claim 1 only in that claim 13 is a device claim, comprising a sensor to carry out the sensing, a system module to carry out the determining steps, and a controller that carries out the receiving step. Preston further teaches this feature, i.e. a device (fig. 4) comprising a sensor to carry out the sensing (fig. 4, item 102), a system module to carry out the determining steps (fig. 4, item 106), and a controller that carries out the receiving step (page 3, paragraph 20, fig. 4, item 40).

Regarding claim 2, Preston discloses determining whether the image contains the target object comprises: identifying an object in the image by the system module to generate a classification result, i.e. identifying the shape of a finger and classifying it as that of a passenger's finger (page 3, paragraph 22); and determining whether the object is the target object according to the classification result, i.e. the result of generally matching (page 3, paragraph 22).

Regarding claim 3, Nikovski et al. discloses determining whether the image contains the target object comprises inputting the time series data into a machine learning model for determination (page 2, paragraph 11).

Regarding claim 4, Nikovski et al. further discloses that transmitting the control signal at least according to the button information (fig.
4, item 424) further comprises: determining a score of the button information corresponding to the tip coordinate by the system module (fig. 4, item 416), and transmitting the control signal according to the button information only when the score is the highest score among scores of the plurality of button information, reaches a threshold (fig. 4, item 418), or both, within a calculation period.

Regarding claim 5, Preston discloses registering the button information, which comprises: identifying whether the target object is a hand, i.e. a finger (page 3, paragraph 22), and whether the target object is a first gesture, i.e. the non-contact activation of page 3, paragraph 23, or a second gesture, i.e. a physically activated button (page 3, paragraph 23), by the system module; enabling a first mode when the system module identifies that the target object is the hand and is the first gesture, in which the first gesture activates the button indicated by the gesture (page 3, paragraph 23); determining a gesture tip coordinate of a gesture tip of the first gesture by the system module, by finding the exact location of the disturbance (page 3, paragraph 23); according to a positive match, generating first button information of the plurality of button information, i.e. generating the match of the activation of the first button (page 3, paragraph 23); and disabling the first mode when the system module identifies that the target object is the hand, i.e. a finger (page 3, paragraph 22), and is the second gesture, i.e. a button push that is inconsistent with the activated non-contact button, thus disabling the first mode of the first button (page 3, paragraph 23). Nikovski et al. discloses calculating a first threshold range according to the gesture tip coordinate, i.e. by calculating whether the threshold is within range of the tip of the finger (fig. 4, items 410, 418), and associating the first threshold range with a first button according to a positive match (fig.
4, item 418 proceeds to item 422).

Regarding claim 6, Nikovski et al. discloses determining the button information corresponding to the tip coordinate among the plurality of button information comprises: determining that the tip coordinate corresponds to the first button information when the tip coordinate is within the first threshold range (fig. 4, item 410).

Regarding claim 11, Nikovski et al. discloses the sensor is a 3D sensor, i.e. an RGBD sensor (page 5, paragraph 48), and the time series data includes the image and depth information, since the data from fig. 4 includes RGB (image) and D (depth) information (page 4, paragraph 48).

Regarding claim 12, Preston discloses the sensor is arranged above the plurality of buttons, and the range of the image does not cover a user's face (figs. 4-6B).

Claims 14-18 and 23-24 are rejected for the same reasons as claims 2-6 and 11-12, respectively. Thus, arguments analogous to those presented above for claims 2-6 and 11-12 are equally applicable to claims 14-18 and 23-24. Claims 14-18 and 23-24 are distinguished from claims 2-6 and 11-12 only in that they have different dependencies, both of which have been previously rejected. Therefore, the prior art applies.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KATHLEEN YUAN DULANEY, whose telephone number is (571) 272-2902. The examiner can normally be reached M1: 9am-5pm, Th1: 9am-1pm, F1: 9am-3pm, M2: 9am-5pm, T2: 9am-5pm, Th2: 9am-5pm, F2: 9am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Emily Terrell, can be reached at (571) 270-3717.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/KATHLEEN Y DULANEY/
Primary Examiner, Art Unit 2666
3/2/2026
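The limitation at the center of the §103 dispute, picking the protruding point closest to the operation panel as the tip, and the claim 4 score gate are both short rules. The sketch below illustrates the claim language only, under assumed data shapes (coordinate tuples, a score dict); it is not the applicant's, Preston's, or Nikovski's actual implementation.

```python
# Illustrative sketch of two claimed steps; data shapes are assumptions.

def select_tip(protruding_points):
    """Claim 1 limitation: among the protruding points of the target object,
    the point closest to the operation panel is the tip.
    Each point is (x, y, distance_to_panel)."""
    if not protruding_points:
        return None  # the optical data contained no target object
    return min(protruding_points, key=lambda p: p[2])[:2]

def select_button(scores, threshold):
    """Claim 4 limitation (conjunctive reading): transmit for a button only
    when its score is the highest within the calculation period AND reaches
    the threshold. `scores` maps button id -> score."""
    button = max(scores, key=scores.get)
    return button if scores[button] >= threshold else None

# One frame of the time series yields candidate protruding points.
candidates = [(120, 85, 41.0), (131, 90, 18.5), (118, 97, 27.2)]
print(select_tip(candidates))                        # -> (131, 90)
print(select_button({"3F": 0.91, "5F": 0.42}, 0.8))  # -> 3F
```

Note that claim 4 recites the highest-score and threshold conditions disjunctively ("or both"); the sketch shows only the conjunctive variant for brevity.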

Prosecution Timeline

Jan 19, 2023 - Application Filed
Aug 18, 2025 - Non-Final Rejection (§103)
Nov 12, 2025 - Response Filed
Dec 03, 2025 - Final Rejection (§103)
Feb 05, 2026 - Request for Continued Examination
Feb 17, 2026 - Response after Non-Final Action
Mar 08, 2026 - Non-Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602801: IMAGE PROCESSING CIRCUITRY AND IMAGE PROCESSING METHOD FOR DEPTH ESTIMATION IN A TIME-OF-FLIGHT SYSTEM
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12602930: METHOD AND SYSTEM FOR CONTINUOUSLY TRACKING HUMANS IN AN AREA
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12593019: INFORMATION PROCESSING APPARATUS USING PARALLAX IN IMAGES CAPTURED FROM A PLURALITY OF DIRECTIONS, METHOD AND STORAGE MEDIUM
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12586242: METHOD, SYSTEM, AND COMPUTER PROGRAM FOR RECOGNIZING POSITION AND ATTITUDE OF OBJECT IMAGED BY CAMERA
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12586165: APPARATUS AND METHOD FOR RECONSTRUCTING IMAGE USING MOTION DEBLURRING
Granted Mar 24, 2026 (2y 5m to grant)

Study what changed to get past this examiner, based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 77%
With Interview: 99% (+24.0%)
Median Time to Grant: 3y 2m
PTA Risk: High
Based on 653 resolved cases by this examiner. Grant probability derived from career allow rate.
