Prosecution Insights
Last updated: April 19, 2026
Application No. 18/036,344

HAND DETECTION DEVICE, GESTURE RECOGNITION DEVICE, AND HAND DETECTION METHOD

Final Rejection §103
Filed: May 10, 2023
Examiner: HASKINS, TWYLER LAMB
Art Unit: 2639
Tech Center: 2600 — Communications
Assignee: Mitsubishi Electric Corporation
OA Round: 2 (Final)
Grant Probability: 58% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 2y 0m
With Interview: 42%

Examiner Intelligence

Grants 58% of resolved cases
Career Allow Rate: 58% (21 granted / 36 resolved; -3.7% vs TC avg)
Interview Lift: -16.0% (minimal lift among resolved cases with interview)
Avg Prosecution: 2y 0m (fast prosecutor; 8 currently pending)
Total Applications: 44 (across all art units; career history)
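As a sanity check on the headline figures above, the dashboard's numbers follow directly from the raw career data. This sketch assumes the grant probability is simply the career allow rate and the with-interview figure adds the stated lift; both are assumptions about how the tool derives its numbers, not documented formulas.

```python
# Reproduce the dashboard's headline numbers from the raw career data
# shown above (21 granted / 36 resolved, -16% interview lift).
granted, resolved = 21, 36
allow_rate = granted / resolved        # career allow rate, ~0.583
interview_lift = -0.16                 # lift observed with interviews

grant_probability = round(allow_rate * 100)                    # 58
with_interview = round((allow_rate + interview_lift) * 100)    # 42

print(grant_probability, with_interview)  # 58 42
```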

Statute-Specific Performance

§101: 8.0% (-32.0% vs TC avg)
§103: 48.7% (+8.7% vs TC avg)
§102: 34.7% (-5.3% vs TC avg)
§112: 6.0% (-34.0% vs TC avg)
Tech Center averages are estimates • Based on career data from 36 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Priority

Receipt is acknowledged of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.

Claim Amendments

Receipt is acknowledged of the amendments to the claims, received by the Office on 08/15/2025.

Response to Arguments

Applicant's arguments filed 08/15/2025 have been fully considered but they are not persuasive. In the remarks, Applicant argues in substance:

Applicant argues that the prior art, particularly Dai, does not disclose determining a difference in luminance between frames, and that Dai does not disclose "calculating an inter-frame luminance difference of the image for hand detection," as recited in claim 1, nor does Dai use an inter-frame luminance difference. Applicant also argues that Dai discloses "the chrominance components are emphasized, and the luminosity is deemphasized"; thus, Dai clearly teaches not to use luminosity to detect a hand. Applicant also argues that there is no mention of an erroneous hand detection at all, nor any discussion of a frame in which a detection is made, nor of referencing a frame before a detection frame to determine whether an error in detection was made. Accordingly, Dai does not disclose or suggest "determining whether or not the detected hand has been erroneously detected on the basis of the luminance difference between a frame in which the hand has been detected and a frame immediately preceding thereof in the image for hand detection," as recited in claim 1. Applicant further argues that Nishida is not relied on to disclose features related to detecting a hand: Nishida does not disclose or suggest the limitations of claim 1 indicated above, nor does the Office Action rely on Nishida to disclose these limitations.
Accordingly, Applicant respectfully disagrees that the alleged combination of Dai and Nishida discloses the limitations of claim 1.

Examiner's Response:

The Examiner respectfully disagrees with Applicant's line of reasoning. The Examiner has thoroughly reviewed Applicant's arguments but respectfully believes that the cited references reasonably and properly meet the claimed limitations. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims (see In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993)). Further, Applicant's argument does not address the actual reasoning of the Examiner's rejections; instead, Applicant attacks the references singly for lacking teachings that the Examiner relied on a combination of references to show. It is well established that one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references (see In re Keller, 642 F.2d 413). The court requires that references be read, not in isolation, but for what they fairly teach in combination with the prior art as a whole (see In re Keller, 642 F.2d 413, 425 (CCPA 1981); In re Merck & Co., 800 F.2d 1091 (Fed. Cir. 1986)).

With a broad interpretation, the Examiner understands Dai as teaching that skin detection is dependent on illumination and that, to reduce this dependency, the registration module adds real-time adaptability in choosing the parameters based on actual scenes [Para 0067]. Dai further discloses generating a skin map and a specific parametric template, which are updated at regular time intervals by examining the ROI [Para 0068]. These adaptive updates inherently require analyzing frame-to-frame differences in illumination (luminance) to maintain correct hand detection.
While [Para 0073] teaches emphasizing chrominance and deemphasizing luminosity, the same passage discloses that a threshold on the digital color image is used to get pixels close to the registered hand skin-tone values, allowing for changes in lighting and other environmental conditions. Thus, Dai expressly accounts for luminance variation in the detection and correction of hand identification. Dai discloses techniques for updating decision boundaries on the fly at regular intervals by examining the ROI [Para 0068]; this requires comparing current-frame data with prior-frame data to maintain accuracy. Dai further describes dynamically adjusting HSV and YCbCr parameters on a frame-by-frame basis for skin detection, which is equivalent to detecting and correcting erroneous hand detection between frames.

Therefore, with this broad interpretation, Dai in combination with Nishida teaches, discloses, or suggests Applicant's invention, a hand detector for detecting a user's hand from video photographed in a vehicle. Thus, due to Applicant's broad claim language, Applicant's invention is not far removed from the art of record. Accordingly, these limitations do not render the claims patentably distinct over the prior art of record. As a result, it is respectfully submitted that the present application is not in condition for allowance. The Examiner maintains that the limitations as presented and as rejected were properly and adequately met. The rejection as presented in the non-final rejection is maintained with regard to the above limitations. Additional and/or modified citations may be present to more concisely address limitations; however, the grounds of rejection remain the same.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3 and 10-20 are rejected under 35 U.S.C. 103 as being unpatentable over Dai et al. (US 2014/0253429) (hereinafter Dai) in view of Nishida et al. (US 2019/0102788 A1) (hereinafter Nishida).
Regarding claim 1, Dai teaches a hand detection device (Dai, figure 1, detection module 120, paragraph [0049], "...The human visual gestures 110 include... such as hand gesture positioned by a user hand...") comprising: a processor (Dai, figure 1, computer system 100) to execute a program, and a memory (Dai, figure 1, computer system 100) to store the program which, when executed by the processor, performs processes of: acquiring an image for hand detection (Dai, paragraphs [0049]-[0052]); calculating an inter-frame luminance difference of the image for hand detection (Dai, paragraphs [0054], [0055], [0068]); detecting a hand of a user from the image for hand detection (Dai, paragraphs [0054], [0055], [0068]); and determining whether or not the detected hand has been erroneously detected on the basis of the luminance difference between a frame in which the hand has been detected and a frame immediately preceding thereof in the image for hand detection (Dai, paragraphs [0068] and [0073]), wherein the processor is configured to calculate an inter-frame difference of an average luminance of the image for hand detection as a luminance difference, and the processor is configured to determine that the hand has been erroneously detected when the difference of the average luminance between a frame in which the hand has been detected and a frame immediately preceding thereof in the image for hand detection is lower than or equal to a predetermined threshold (Dai, paragraphs [0068] and [0073]).

However, Dai does not specifically teach that the image is an image obtained by capturing a hand detection region inside a vehicle. Nishida teaches a manipulation input device with a hand detector that uses an image obtained by capturing a hand detection region inside a vehicle (Nishida, figure 3, [0029]-[0030]). These arts are analogous since they are both related to hand detection devices.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to modify the invention of Dai with the capability to be utilized in a vehicle as taught by Nishida, to increase the capability of detecting with high accuracy a gesture from an image taken by a camera in a variety of ambient environments while driving, as taught by Nishida in paragraph [0007].

Regarding claim 2, the combination of Dai and Nishida teaches the hand detection device according to claim 1, and also teaches wherein the image for hand detection is an image obtained by trimming a portion corresponding to the hand detection region from an image captured by a camera mounted in the vehicle (Nishida, figures 4A-4H, paragraphs [0032]-[0035]).

Regarding claim 3, the combination of Dai and Nishida teaches the hand detection device according to claim 1, and also teaches wherein the hand detection region is a region between a driver's seat and a passenger seat in the vehicle (Nishida, figures 3, 4A and 4B, paragraphs [0028]-[0029]).

Regarding claim 10, the combination of Dai and Nishida teaches the hand detection device according to claim 1, and also teaches further comprising a camera configured to capture an image of the vehicle including the image for hand detection (Nishida, figures 3, 4A and 4B, paragraphs [0028]-[0029]).

Regarding claim 11, the combination of Dai and Nishida teaches the hand detection device according to claim 10, and also teaches wherein the camera is arranged in a central portion of a dashboard of the vehicle (Nishida, figures 3, 4A and 4B, paragraphs [0028]-[0029]).
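Claim 1's average-luminance test, as characterized in this action, can be sketched as follows. This is an illustrative sketch only, not Dai's or Applicant's actual implementation; the threshold value and all names are assumptions.

```python
import numpy as np

# Assumed threshold for illustration; the claim leaves the value unspecified.
LUMA_DIFF_THRESHOLD = 2.0

def average_luminance(frame: np.ndarray) -> float:
    """Mean luma of a grayscale frame (H x W, uint8)."""
    return float(frame.mean())

def erroneously_detected(detection_frame: np.ndarray,
                         preceding_frame: np.ndarray,
                         threshold: float = LUMA_DIFF_THRESHOLD) -> bool:
    """Flag the detection as erroneous when the average luminance barely
    changed between the frame in which the hand was detected and the frame
    immediately preceding it (difference <= threshold)."""
    diff = abs(average_luminance(detection_frame)
               - average_luminance(preceding_frame))
    return diff <= threshold
```

Under this reading, a hand that suddenly "appears" without a corresponding luminance change between frames is treated as a likely false positive.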
Regarding claim 12, the combination of Dai and Nishida teaches the hand detection device according to claim 1, and also teaches a gesture recognizer configured to recognize a gesture made by the hand determined to be correctly detected (Dai, figure 1, detection module 120, paragraph [0049], "...The human visual gestures 110 include... such as hand gesture positioned by a user hand...").

Claim 13 is rejected for the same reasons as claim 1.

Regarding claim 14, Dai teaches a hand detection device (Dai, figure 1, detection module 120, paragraph [0049], "...The human visual gestures 110 include... such as hand gesture positioned by a user hand...") comprising: a processor (Dai, figure 1, computer system 100) to execute a program, and a memory (Dai, figure 1, computer system 100) to store the program which, when executed by the processor, performs processes of: acquiring an image for hand detection (Dai, paragraphs [0049]-[0052]); calculating an inter-frame luminance difference of the image for hand detection (Dai, paragraphs [0054], [0055], [0068]); detecting a hand of a user from the image for hand detection (Dai, paragraphs [0054], [0055], [0068]); and determining whether or not the detected hand has been erroneously detected on the basis of the luminance difference between a frame in which the hand has been detected and a frame immediately preceding thereof in the image for hand detection (Dai, paragraphs [0068] and [0073]), wherein the processor is configured to divide the image for hand detection into a plurality of blocks and calculate an inter-frame difference of a Histograms of Oriented Gradients (HOG) feature amount of each block as an inter-frame luminance difference (Dai, paragraphs [0065]-[0068]), and the processor is configured to determine that the detected hand has been erroneously detected when the number of blocks in which the difference in the HOG feature amount between a frame in which the hand has been detected and a frame immediately preceding thereof in the image for hand detection surpasses a predetermined threshold is lower than or equal to a certain number (Dai, paragraphs [0065]-[0068] and [0073]).

However, Dai does not specifically teach that the image is an image obtained by capturing a hand detection region inside a vehicle. Nishida teaches a manipulation input device with a hand detector that uses an image obtained by capturing a hand detection region inside a vehicle (Nishida, figure 3, [0029]-[0030]). These arts are analogous since they are both related to hand detection devices. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to modify the invention of Dai with the capability to be utilized in a vehicle as taught by Nishida, to increase the capability of detecting with high accuracy a gesture from an image taken by a camera in a variety of ambient environments while driving, as taught by Nishida in paragraph [0007].

Claims 15-16 are rejected for the same reasons as claims 2 and 3 above.

Regarding claim 17, the combination of Dai and Nishida teaches the hand detection device according to claim 14, wherein the processor is configured to determine that the hand detected from the image for hand detection is correctly detected, after determination of the hand having been correctly detected, until the frame in which the number of blocks in which the difference in the HOG feature amount with the immediately preceding frame surpasses the threshold is lower than or equal to the certain number appears a predetermined number of times in succession (Dai, paragraphs [0065]-[0068] and [0073]).

Claims 18-19 are rejected for the same reasons as claims 10 and 11 above.

Claim 21 is rejected for the same reason as claim 14 above.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SUPERVISORY EXAMINER TWYLER HASKINS, whose telephone number is (571) 272-7406. The Supervisory Examiner can normally be reached Mon-Fri, 8am-4pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the Supervisory Examiner by telephone are unsuccessful, the examiner's supervisor, Director John Barlow, can be reached at (571) 272-4550. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TWYLER L HASKINS/
Supervisory Patent Examiner, Art Unit 2639
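Claim 14's block-wise HOG difference test, as mapped in the action above, can be sketched as follows. This is an illustrative sketch, not the application's or Dai's implementation: the bin count, block size, and both thresholds are assumptions, and the simplified per-block orientation histogram (no cell/block normalization) stands in for a full HOG descriptor.

```python
import numpy as np

N_BINS = 9                 # assumed: 9 orientation bins, a common HOG choice
BLOCK = 16                 # assumed block size in pixels
HOG_DIFF_THRESHOLD = 0.5   # assumed per-block difference threshold
MAX_CHANGED_BLOCKS = 3     # assumed value of the claim's "certain number"

def block_hog(frame: np.ndarray) -> dict:
    """Gradient-orientation histogram for each BLOCK x BLOCK block."""
    gy, gx = np.gradient(frame.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation, [0, pi)
    feats = {}
    h, w = frame.shape
    for by in range(0, h - BLOCK + 1, BLOCK):
        for bx in range(0, w - BLOCK + 1, BLOCK):
            hist, _ = np.histogram(
                ang[by:by + BLOCK, bx:bx + BLOCK],
                bins=N_BINS, range=(0.0, np.pi),
                weights=mag[by:by + BLOCK, bx:bx + BLOCK])
            feats[(by, bx)] = hist
    return feats

def erroneously_detected_hog(detection_frame: np.ndarray,
                             preceding_frame: np.ndarray) -> bool:
    """Erroneous when the number of blocks whose HOG-feature difference
    surpasses the per-block threshold is at or below MAX_CHANGED_BLOCKS,
    i.e. too little changed between the detection frame and the frame
    immediately preceding it."""
    cur = block_hog(detection_frame)
    prev = block_hog(preceding_frame)
    changed = sum(
        np.abs(cur[k] - prev[k]).sum() > HOG_DIFF_THRESHOLD for k in cur)
    return changed <= MAX_CHANGED_BLOCKS
```

The design intuition matches claim 1's average-luminance test: a real hand entering the scene should change the gradient structure of several blocks at once, so a detection accompanied by almost no block-level change is suspect.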

Prosecution Timeline

May 10, 2023
Application Filed
May 21, 2025
Non-Final Rejection — §103
Aug 15, 2025
Response Filed
Sep 25, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12462520
OUTSIDE ENVIRONMENT RECOGNITION DEVICE AND OUTSIDE ENVIRONMENT RECOGNITION METHOD
Granted Nov 04, 2025 • 2y 5m to grant
Patent 12437550
Method for Counting Passengers of a Public Transportation System, Control Apparatus and Computer Program Product
Granted Oct 07, 2025 • 2y 5m to grant
Patent 12412270
METHOD AND SYSTEM FOR DETERMINING PROGRESSION OF ATRIAL FIBRILLATION BASED ON HEMODYNAMIC METRICS
Granted Sep 09, 2025 • 2y 5m to grant
Patent 12149811
CAMERA MODULE HAVING A SOLDERING PORTION COUPLING A DRIVING DEVICE AND A CIRCUIT BOARD
Granted Nov 19, 2024 • 2y 5m to grant
Patent 10880462
MINIATURE VIDEO RECORDER
Granted Dec 29, 2020 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 58%
With Interview: 42% (-16.0%)
Median Time to Grant: 2y 0m
PTA Risk: Moderate
Based on 36 resolved cases by this examiner. Grant probability derived from career allow rate.
