Prosecution Insights
Last updated: April 19, 2026
Application No. 18/450,611

GESTURE RECOGNITION DEVICE

Final Rejection — §101, §102, §103, §112
Filed
Aug 16, 2023
Examiner
THIRUGNANAM, GANDHI
Art Unit
2672
Tech Center
2600 — Communications
Assignee
Quanta Computer Inc.
OA Round
2 (Final)
Grant Probability: 74% — Favorable
Expected OA Rounds: 3-4
Time to Grant: 3y 7m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 74% — above average (413 granted / 559 resolved; +11.9% vs TC avg)
Interview Lift: +12.3% — moderate, measured across resolved cases with interview
Avg Prosecution: 3y 7m typical timeline; 42 currently pending
Total Applications: 601 across all art units
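The headline figures above follow directly from the raw counts shown on the card. A minimal arithmetic sketch, assuming (as the projections footnote suggests) that grant probability is the career allow rate and that the interview lift is additive in percentage points:

```python
granted, resolved = 413, 559          # career counts from the examiner card
allow_rate = granted / resolved       # 0.7388... -> displayed as 74%
interview_lift = 0.123                # +12.3 percentage points with an interview

print(f"Career allow rate: {allow_rate:.1%}")                  # 73.9%
print(f"With interview:    {allow_rate + interview_lift:.1%}") # 86.2%, displayed as 86%
```

The additive-lift assumption is an inference from the displayed 74% → 86% jump, not something the page states explicitly.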

Statute-Specific Performance

§101: 9.6% (-30.4% vs TC avg)
§102: 21.5% (-18.5% vs TC avg)
§103: 35.8% (-4.2% vs TC avg)
§112: 27.1% (-12.9% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 559 resolved cases
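The per-statute deltas are internally consistent: each rate plus the magnitude of its deficit implies the same Tech Center average, roughly 40%, which is presumably the black reference line. A quick consistency check (the 40% figure is inferred from the displayed numbers, not stated by the source):

```python
# (rate %, delta vs TC avg %) as shown on the statute-performance card
stats = {
    "101": (9.6, -30.4),
    "102": (21.5, -18.5),
    "103": (35.8, -4.2),
    "112": (27.1, -12.9),
}

for statute, (rate, delta) in stats.items():
    implied_tc_avg = round(rate - delta, 1)  # rate minus its (negative) delta
    print(f"§{statute}: implied TC average = {implied_tc_avg}%")
# every statute implies the same 40.0% reference line
```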

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 10/31/2025 have been fully considered but they are not persuasive.

Regarding Applicant's arguments (pg. 6): How does recognizing gestures through the images of a camera make using AR more intuitive and convenient? Which of the many functions performed make the AR device more intuitive and convenient? How do the claims reflect the improvement when the improvement is not disclosed in paragraph 62? The AR device is not even recited in the claim language. Simply adding an AR device would NOT be sufficient to overcome the §101 rejection, but would be a step in the right direction. Paragraph 62 merely alleges an improvement without specifically stating what the improvement is. Applicant's argument is not persuasive.

In response to Applicant's argument (pg. 8): extracting, storing, determining, and performing are all clearly mental steps. For example, looking at a person doing sign language, a person extracts the positions of one or more hands, remembers (stores) the position of the hands, determines what the hand sign means, and performs a response. Applicant's reply clearly demonstrates that this is an abstract idea.

In response to Applicant's argument (pg. 9, paragraphs 2-6): Paragraph 4 relates to Applicant's Admitted Prior Art. None of what Applicant argues is recited in the claim. As Applicant pointed out earlier, the claims must reflect the improvement described in the specification. The Examiner agrees that reducing weight, extending battery life, improving latency, and reducing Lidar/lens use are all significant improvements. So how does the claimed invention enable these improvements? Simply put, gesture recognition is not an improvement.
If the claim had a different way of doing gesture recognition, that may have been considered as an improvement. "Performing many functions" cannot be considered as an improvement either.

In response to Applicant's argument (pg. 12-13): Applicant highlighted the appropriate text, which teaches that when a match exists a command or instruction may be executed in accordance therewith. This clearly shows a relationship between content/operation and gesture pattern, thus clearly showing "contents or operations about the relationship between the static gesture pattern and the dynamic gesture patterns."

In response to Applicant's arguments (pg. 13-14): the new amendments will be addressed below.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-10 are rejected under 35 U.S.C. 101 because the claimed invention is directed to the abstract idea of recognizing hand signals without significantly more. The claim(s) recite(s): "an image extraction device, extracting a first gesture image;", which can reasonably be interpreted as a person recognizing a hand gesture. "wherein the processor further obtains second gesture information corresponding to a second gesture image according to the second gesture image extracted by the image extracting device at a next time point, and determines whether the second gesture image corresponds to the first static gesture pattern;", which can reasonably be interpreted as a person visually obtaining a second gesture (either the second letter of a word or the second word of a sentence) and recognizing the second hand gesture.
"wherein when the second gesture image corresponds to a second static gesture pattern of the plurality of static gesture patterns, the processor performs a function corresponding to the second static gesture pattern; and", which can be interpreted as merely outputting data. See MPEP 2106.05(g). Additionally, it could be interpreted as a person responding to the gesture. Using the example above, if a person using sign language says hello, one could respond by saying hello back (either verbally or in sign language).

"wherein when the second gesture image corresponds to the first static gesture pattern, the processor further determines whether the second gesture image corresponds to a first dynamic gesture pattern corresponding to the first static gesture pattern, to determine to perform a function corresponding to the first static gesture pattern, or perform a function corresponding to the first dynamic gesture pattern.", which can be interpreted as merely outputting data. See MPEP 2106.05(g). Additionally, it could be interpreted as a person responding to the gesture. Using the example above, if a person using sign language says hello, one could respond by saying hello back (either verbally or in sign language).

This judicial exception is not integrated into a practical application. The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements of a storage circuit and a processor, coupled to the extraction device and the storage device, merely recite the words "apply it" (or an equivalent) with the judicial exception, merely include instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f).

Claim 2 is directed to an additional element of outputting and is not considered significantly more.
See MPEP 2106.05(g). Claims 6-8 and 10 are directed to the abstract idea without significantly more. Claim 9 is directed to the abstract idea without significantly more. Claim 9 additionally recites an additional element of a "deep learning algorithm". The BRI of this term includes the human mind. Even if the DL algorithm were interpreted as a CNN, it would amount to an "apply it" step. See MPEP 2106.05(f).

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a): (a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112: The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-2 and 6-10 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claim 1 recites "determines whether the second gesture image corresponds to the first static gesture pattern". The Examiner is unable to find support for this limitation.

Claim 1 recites "when the second gesture image corresponds to a second static gesture pattern of the plurality of static gesture patterns, the processor performs a function corresponding to the second static gesture pattern;". The Examiner is unable to find support for this limitation.

Claim 1 recites "when the second gesture image corresponds to the first static gesture pattern, the processor further determines whether the second gesture image corresponds to a first dynamic gesture pattern corresponding to the first static gesture pattern, to determine to perform a function corresponding to the first static gesture pattern, or perform a function corresponding to the first dynamic gesture pattern". The Examiner is unable to find support for this limitation.

Claims 2 and 6-10 are rejected as dependent upon a rejected claim.

Claim Objections

Claim 1 is objected to because of the following informalities: Claim 1 recites "wherein", but should recite "further comprising" to be grammatically correct. Appropriate correction is required.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-2, 6-7, and 9 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by ElDokor (PGPub 2010/0295783).

ElDokor discloses 1. A gesture recognition device, comprising: an image extraction device, extracting a first gesture image; (ElDokor, Fig.
21 #2110, Depth map containing one or more objects of interest)

a storage circuit, storing a plurality of gesture patterns, wherein the plurality of gesture patterns comprise a plurality of static gesture patterns and a plurality of dynamic gesture patterns, wherein each dynamic gesture pattern corresponds to one of the plurality of static gesture pattern; and (ElDokor, Fig. 21 #2140, Gesture Library/Database; see paragraph 79, "stored gestures in a gesture library or database to determine the likelihood of a match. Implementations of gesture libraries or databases may include video segments or maps of the movement of particular points of the hand through time to enable the hidden Markov model to determine whether what stored gestures in the database could have been produced by the observed gesture.")

processor, coupled to the extraction device and the storage device, obtaining first gesture information corresponding to the first gesture image from the storage device according to the first gesture image, and (ElDokor, Fig. 21 #2140, paragraph 79, "the resulting set of depth data frames that contain the gesture (or portions of the set of frames containing the gesture) are may be evaluated using a hidden Markov model and stored gestures in a gesture library or database to determine the likelihood of a match.
Implementations of gesture libraries or databases may include video segments or maps of the movement of particular points of the hand through time to enable the hidden Markov model to determine whether what stored gestures in the database could have been produced by the observed gesture."; paragraph 80, "If the observed gesture is determined at step 2150 to be a static gesture, then implementations of the method may utilize a generative artificial neural network to determine whether the gesture matches one included in a gesture database")

wherein the processor selects a first static gesture pattern corresponding to the first gesture image from the plurality of static gesture patterns according to the first gesture information; wherein the processor further obtains second gesture information corresponding to a second gesture image according to the second gesture image extracted by the image extracting device at a next time point, and determines whether the second gesture image corresponds to the first static gesture pattern; (ElDokor, paragraph 79, "At step 2130, the method may then determine whether the depth data in one or more of the frames includes a gesture that is determined likely to be static or dynamic.
A wide variety of methods may be used to make the decision, including, by non-limiting example, a time requirement, recognition of movement within a particular time interval, identification that particular hand features are visible within a frame, or any other method of determining whether the gesture is executed in a fixed or a moving fashion.", where a time requirement requires determining whether the gesture between two frames remains similar/the same for a distinct time period; if so, it belongs to a static gesture pattern; paragraph 80, "If the observed gesture is determined at step 2150 to be a static gesture, then implementations of the method may utilize a generative artificial neural network to determine whether the gesture matches one included in a gesture database."; paragraph 80, "If the network determines that a match exists at step 2160, then at step 2170 a command or instruction may be executed in accordance therewith.")

wherein when the second gesture image corresponds to a second static gesture pattern of the plurality of static gesture patterns, the processor performs a function corresponding to the second static gesture pattern; and (ElDokor, paragraph 80, "If the observed gesture is determined at step 2150 to be a static gesture, then implementations of the method may utilize a generative artificial neural network to determine whether the gesture matches one included in a gesture database."; paragraph 80, "If the network determines that a match exists at step 2160, then at step 2170 a command or instruction may be executed in accordance therewith.", where the static gesture is compared to a database, hence a plurality of static gesture patterns)

wherein when the second gesture image corresponds to the first static gesture pattern, the processor further determines whether the second gesture image corresponds to a first dynamic gesture pattern corresponding to the first static gesture pattern, to determine to perform a function corresponding to the first
static gesture pattern, or perform a function corresponding to the first dynamic gesture pattern. (ElDokor, paragraph 79, "If the gesture is determined to be dynamic at step 2130, the processing passes to step 2140, and the resulting set of depth data frames that contain the gesture (or portions of the set of frames containing the gesture) are may be evaluated using a hidden Markov model and stored gestures in a gesture library or database to determine the likelihood of a match. Implementations of gesture libraries or databases may include video segments or maps of the movement of particular points of the hand through time to enable the hidden Markov model to determine whether what stored gestures in the database could have been produced by the observed gesture."; paragraph 80, "If the network determines that a match exists at step 2160, then at step 2170 a command or instruction may be executed in accordance therewith.")

ElDokor discloses 2. The gesture recognition device of claim 1, further comprising: an optical machine device, displaying a display result corresponding to the function. (ElDokor, paragraph 38, "When one or more gestures performed by an imaged individual are recognized, this individual 208 can execute and interact with the computer 206 or any other system in communication with computer 206 and may also provide feedback to the individual 208 through display 204.")

ElDokor discloses 6. The gesture recognition device of claim 5, wherein when the recognition circuit determines that the second gesture image corresponds to the dynamic gesture pattern corresponding to the selected static gesture pattern, the recognition circuit performs the function corresponding to the dynamic gesture pattern. (See claim 5 above; if the sequence is classified as a dynamic gesture, ElDokor would perform the command corresponding to the dynamic gesture)

ElDokor discloses 7.
The gesture recognition device of claim 5, wherein when the recognition circuit determines that the second gesture image does not correspond to the dynamic gesture pattern corresponding to the selected static gesture pattern, the recognition circuit performs the function corresponding to the static gesture pattern corresponding to the second gesture image. (See claim 5 above; if the sequence is not classified as a dynamic gesture, ElDokor would perform the command corresponding to the static gesture)

ElDokor discloses 8. The gesture recognition device of claim 4, wherein the recognition circuit further selects the static gesture pattern corresponding to the first gesture image from the plurality of static gesture patterns according to a classification tree. (ElDokor, paragraph 80, "If the observed gesture is determined at step 2150 to be a static gesture, then implementations of the method may utilize a generative artificial neural network to determine whether the gesture matches one included in a gesture database.", where the database is the classification tree)

ElDokor discloses 9. The gesture recognition device of claim 1, wherein the recognition circuit further obtains the first gesture information corresponding to the first gesture image according to a deep learning algorithm. (ElDokor, Fig. 21 #2150, Generative Artificial Neural Network)

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over ElDokor in view of Pisharady ("Recent methods and databases in vision-based hand gesture recognition: A review").

ElDokor discloses 10. The gesture recognition device of claim 1, but does not expressly disclose "wherein the first gesture information comprises angle information and direction information corresponding to fingers." Pisharady discloses "wherein the first gesture information comprises angle information and direction information corresponding to fingers." (Pisharady, Fig. 1)

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to use the characterizations of Pisharady in order to determine gestures in ElDokor. The suggestion/motivation for doing so would have been using well-known, commonly used gesture information to provide consistent results. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine ElDokor in view of Pisharady to obtain the invention as specified in claim 10.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GANDHI THIRUGNANAM, whose telephone number is (571) 270-3261. The examiner can normally be reached M-F 8:30-5 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sumati Lefkowitz, can be reached at 571-272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/GANDHI THIRUGNANAM/
Primary Examiner, Art Unit 2672

Prosecution Timeline

Aug 16, 2023
Application Filed
Aug 01, 2025
Non-Final Rejection — §101, §102, §103
Oct 31, 2025
Response Filed
Feb 04, 2026
Final Rejection — §101, §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597135 — SYSTEMS AND METHODS FOR UPDATING A GRAPHICAL USER INTERFACE BASED UPON INTRAOPERATIVE IMAGING
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12561963 — CROSS-MODALITY NEURAL NETWORK TRANSFORM FOR SEMI-AUTOMATIC MEDICAL IMAGE ANNOTATION
Granted Feb 24, 2026 (2y 5m to grant)

Patent 12555291 — METHOD FOR AUTOMATED REGULARIZATION OF HYBRID K-SPACE COMBINATION USING A NOISE ADJUSTMENT SCAN
Granted Feb 17, 2026 (2y 5m to grant)

Patent 12541869 — GRAIN FLAKE MEASUREMENT SYSTEM, GRAIN FLAKE MEASUREMENT METHOD, AND GRAIN FLAKE COLLECTION, MOVEMENT, AND MEASUREMENT SYSTEM
Granted Feb 03, 2026 (2y 5m to grant)

Patent 12525007 — TRAINING METHOD AND ELECTRONIC DEVICE
Granted Jan 13, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 74%
With Interview: 86% (+12.3%)
Median Time to Grant: 3y 7m
PTA Risk: Moderate
Based on 559 resolved cases by this examiner. Grant probability derived from career allow rate.
