Prosecution Insights
Last updated: April 19, 2026
Application No. 19/143,318

BIOLOGICAL INFORMATION ACQUISITION ASSISTANCE DEVICE AND BIOLOGICAL INFORMATION ACQUISITION ASSISTANCE METHOD

Status: Non-Final OA (§103)
Filed: Jun 25, 2025
Examiner: PATEL, PREMAL R
Art Unit: 2624
Tech Center: 2600 — Communications
Assignee: Panasonic Intellectual Property Management Co., Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 78% (Favorable)
OA Rounds: 1-2
To Grant: 2y 5m
With Interview: 84%

Examiner Intelligence

Career Allow Rate: 78% — above average (744 granted / 955 resolved; +15.9% vs TC avg)
Interview Lift: +6.3% (moderate lift, resolved cases with interview vs without)
Avg Prosecution: 2y 5m (typical timeline; 22 currently pending)
Total Applications: 977 (career history, across all art units)

Statute-Specific Performance

§101: 2.9% (-37.1% vs TC avg)
§103: 50.9% (+10.9% vs TC avg)
§102: 18.5% (-21.5% vs TC avg)
§112: 19.8% (-20.2% vs TC avg)
Tech Center averages are estimates • Based on career data from 955 resolved cases
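The headline figures above are simple ratios over the examiner's career counts. A quick sanity check in Python (counts taken from this page; the implied Tech Center average is back-derived from the stated +15.9% delta):

```python
# Career allow rate: granted / resolved, per the counts shown above.
granted, resolved = 744, 955
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")    # 77.9%, shown as 78%

# The "+15.9% vs TC avg" delta implies a Tech Center average of about:
tc_avg = allow_rate - 0.159
print(f"Implied TC 2600 average: {tc_avg:.1%}")  # about 62.0%
```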

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “an acquisition unit that acquires a captured image…; a detection unit that detects the at least one fingertip shown in the captured image; a generation unit that generates a first fingertip image obtained by cutting out a fingertip region including the detected fingertip; and a control unit that generates a biometric information acquisition screen including” in claim 1; and “…an evaluation unit that calculates an evaluation value indicating…” in claim 14.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. The specification at paragraphs [0019] and [0020] describes the corresponding structure for each of the claimed units. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid such interpretation (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid such interpretation.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 6-10, 13, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Kawashima et al. (2010/0194713) in view of Riddle et al. (2016/0180142).

Regarding claim 1, Kawashima teaches a biometric information acquisition assistance device comprising: an acquisition unit that acquires a captured image of at least one fingertip captured in a contactless state (Fig 2; para [0095]: The camera 12b takes and captures an image of the hand H including a finger.); a detection unit that detects the at least one fingertip shown in the captured image (para [0110]: The software 103c makes the device perform as a tip area detecting module, a tip area locating module, and a verifying module. The software 103c performs by using a memory region 1102a of the RAM 1102 for a fingertip point computing treatment. The software 103c performs a binarization of an image of a hand which is taken by the camera 12b. The software 103c performs a determination in which a fingertip position of the actual finger image is determined as a fingertip point.); and a generation unit that generates a first fingertip image obtained by cutting out a fingertip region including the detected fingertip (Fig 7; Fig 8; para [0121]: FIG. 8 is a plan view of an image showing pixels labeled with different labeling numbers for identifying a plurality of tip areas. Then, the device performs a process for separating respective tip areas on the image data after completing the diminishing process. In FIG. 8, three tip areas are identified by reference numbers 1, 2, and 3.).

Kawashima fails to teach a control unit that generates a biometric information acquisition screen including the captured image and a second fingertip image disposed outside a first region including the fingertip region of the fingertip and obtained by enlarging the first fingertip image, and outputs the biometric information acquisition screen to a monitor, as claimed.

Riddle teaches a biometric information acquisition assistance device comprising: an acquisition unit that acquires a captured image of at least one fingertip captured in a contactless state (para [0047]: Consequently, FIG. 1 illustrates system processing of an embodiment of a system employed to capture fingerprints using a contactless means.; para [0048]: A biometric data acquisition sensor included in an embodiment of the system is triggered by an ultrasonic range finder which cues the system to the presence of an object (e.g., a hand) in the biometric data acquisition sensor's field of view (FOV). The FOV is primarily determined by the biometric data acquisition sensor used.); and a control unit that generates a biometric information acquisition screen including the captured image and a second fingertip image disposed outside a first region including the fingertip region of the fingertip and obtained by enlarging the first fingertip image, and outputs the biometric information acquisition screen to a monitor (para [0071]: With reference now to FIG. 9, shown is a graphical user interface (GUI) 900 for displaying results of an embodiment of a system and method for extracting 2D fingerprint images from high resolution 3D surface data captured via contactless means. For the results shown in the GUI 900, several sets of whole hand data were collected, each at different resolutions and varying hand poses. In a test of an embodiment of a system and method for extracting 2D fingerprint images from high resolution 3D surface data captured via contactless means, timing numbers were determined for major components of the system including hand pose estimation 104 (see also FIGS. 4-5), fingertip ROI extraction 106 (see also FIG. 6), 2D unrolling 108 (see also FIGS. 8A-8D), and fingerprint quality and minutiae extraction 110. Fig 9 shows the 3D hand on the left and the 2D fingerprint images on the right, wherein the fingerprint images on the right are enlarged compared to the hand image on the left.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to have modified the device of Kawashima with the extraction and enlargement of the fingertip image as taught by Riddle, because the enhanced fingerprint images significantly increase fingerprint matching performance, reduce user interaction, and significantly decrease processing times (Riddle: para [0090]; [0105]).

Regarding claim 2, Kawashima teaches the biometric information acquisition assistance device as explained for claim 1 above.
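As an aside, the claim 1 pipeline the rejection maps onto the references (detect a fingertip in a captured image, cut out a fingertip region, enlarge it for display) can be illustrated with a minimal pure-Python sketch. This is only an illustration of the claimed steps, not code from Kawashima or Riddle; the "image" is a nested list of grayscale values, and every name and threshold here is hypothetical:

```python
def detect_fingertip(image, threshold=128):
    """Return (row, col) of the topmost bright pixel -- a crude
    stand-in for Kawashima's binarize-and-locate tip detection."""
    for r, row in enumerate(image):
        for c, v in enumerate(row):
            if v >= threshold:
                return (r, c)
    return None

def cut_out_region(image, center, half=1):
    """First fingertip image: crop a square region around the tip."""
    r0, c0 = center
    return [row[max(c0 - half, 0):c0 + half + 1]
            for row in image[max(r0 - half, 0):r0 + half + 1]]

def enlarge(region, scale=2):
    """Second fingertip image: nearest-neighbour upscaling,
    standing in for the enlarged display images of Riddle's Fig 9."""
    return [[v for v in row for _ in range(scale)]
            for row in region for _ in range(scale)]

# Tiny 4x4 frame with one bright "fingertip" pixel at (1, 2).
frame = [[0, 0, 0, 0],
         [0, 0, 200, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
tip = detect_fingertip(frame)       # (1, 2)
first = cut_out_region(frame, tip)  # 3x3 crop around the tip
second = enlarge(first, scale=2)    # 6x6 enlarged image
```

The point of the sketch is only that "cut out, then enlarge" is an ordinary image-processing sequence, which is the combination the rejection attributes to Kawashima in view of Riddle.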
Kawashima fails to teach wherein the detection unit further detects an orientation of the fingertip shown in the captured image, and the control unit rotates the orientation of the second fingertip image to a predetermined orientation and displays the biometric information acquisition screen including the rotated second fingertip image, as claimed.

Riddle teaches the biometric information acquisition assistance device wherein the detection unit further detects an orientation of the fingertip shown in the captured image (para [0061]: The location and orientation of each hand component (such as the fingertips) is calculated from the hand model by transforming the origin and orientation of the component's coordinate system to a point and orientation relative to the sensor camera.), and the control unit rotates the orientation of the second fingertip image to a predetermined orientation (para [0069]: Given the new axes for each regional surface, a transformation is determined that maps the regional surfaces to their new coordinate systems. This transformation rotates each region to the new axes, block 806. Depth differences between ridges and valleys are then extracted from the transformed regional surfaces. FIG. 8C is a diagram graphically illustrating the computation of axes for the regional surfaces 804 and rotating of the regions to the new axes 806.) and displays the biometric information acquisition screen including the rotated second fingertip image (Fig 9).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to have modified the device of Kawashima with the extraction and enlargement of the fingertip image as taught by Riddle, because the enhanced fingerprint images significantly increase fingerprint matching performance, reduce user interaction, and significantly decrease processing times (Riddle: para [0090]; [0105]).

Regarding claim 6, Kawashima and Riddle teach the biometric information acquisition assistance device as explained for claim 1 above. Kawashima and Riddle fail to teach wherein the control unit determines a magnification of the first fingertip image based on a size outside the first region in a display region of the monitor, as claimed.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to have modified the device of Kawashima and Riddle so that the control unit determines a magnification of the first fingertip image based on a size outside the first region in a display region of the monitor, because changes in size/proportion are a matter of design choice. (See MPEP 2144.04, IV. CHANGES IN SIZE, SHAPE, OR SEQUENCE OF ADDING INGREDIENTS, A. Changes in Size/Proportion: In re Rose, 220 F.2d 459, 105 USPQ 237 (CCPA 1955) (claims directed to a lumber package "of appreciable size and weight requiring handling by a lift truck" were held unpatentable over prior art lumber packages which could be lifted by hand because limitations relating to the size of the package were not sufficient to patentably distinguish over the prior art); In re Rinehart, 531 F.2d 1048, 189 USPQ 143 (CCPA 1976) ("mere scaling up of a prior art process capable of being scaled up, if such were the case, would not establish patentability in a claim to an old process so scaled." 531 F.2d at 1053, 189 USPQ at 148). In Gardner v. TEC Syst., Inc., 725 F.2d 1338, 220 USPQ 777 (Fed. Cir. 1984), cert. denied, 469 U.S. 830, 225 USPQ 232 (1984), the Federal Circuit held that, where the only difference between the prior art and the claims was a recitation of relative dimensions of the claimed device and a device having the claimed relative dimensions would not perform differently than the prior art device, the claimed device was not patentably distinct from the prior art device.) Furthermore, changing the magnification would not change the functionality of the device, and it would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to take into consideration the remaining portion of the screen, such that it would result in optimized use of the remaining screen space.

Regarding claim 7, Kawashima and Riddle teach the biometric information acquisition assistance device as explained for claim 1 above. Kawashima and Riddle fail to teach wherein the control unit determines a magnification of the first fingertip image based on a size of a second region, which is outside the first region in a display region of the monitor and has a magnification of one times or more of the magnification of the first fingertip image, and generates the biometric information acquisition screen on which the second fingertip image is disposed in the second region, as claimed.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to have modified the device of Kawashima and Riddle so that the control unit determines a magnification of the first fingertip image based on a size of a second region, which is outside the first region in a display region of the monitor and has a magnification of one times or more of the magnification of the first fingertip image, and generates the biometric information acquisition screen on which the second fingertip image is disposed in the second region, because changes in size/proportion are a matter of design choice, for the same reasons and on the same authority (MPEP 2144.04; In re Rose; In re Rinehart; Gardner v. TEC Syst., Inc.) set forth for claim 6 above. Furthermore, changing the magnification would not change the functionality of the device, and it would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to take into consideration the remaining portion of the screen, such that it would result in optimized use of the remaining screen space.

Regarding claim 8, Kawashima teaches the biometric information acquisition assistance device as explained for claim 1 above. Kawashima fails to teach wherein the first region includes the fingertip regions of all of the fingertips detected by the detection unit, as claimed. Riddle teaches the biometric information acquisition assistance device wherein the first region includes the fingertip regions of all of the fingertips detected by the detection unit (Fig 9, the right side of which shows a first region that includes the fingertip regions of all of the fingertips). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to have modified the device of Kawashima with the extraction and enlargement of the fingertip image as taught by Riddle, because the enhanced fingerprint images significantly increase fingerprint matching performance, reduce user interaction, and significantly decrease processing times (Riddle: para [0090]; [0105]).

Regarding claim 9, Kawashima and Riddle teach the biometric information acquisition assistance device as explained for claim 1 above. Kawashima and Riddle fail to teach wherein the control unit superimposes a frame line indicating the fingertip region of the fingertip corresponding to the second fingertip image among the fingertips shown in the captured image, as claimed.
However, Riddle further teaches wherein the control unit superimposes a description of the particular finger indicating the fingertip region of the fingertip corresponding to the second fingertip image among the fingertips shown in the captured image (Fig 9 shows each fingertip on the right, in the box, labelled with text describing which finger it is (thumb, index finger, middle finger, ring finger, little finger)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to have modified the device of Kawashima and Riddle to provide the illustration in another form, such as a superimposed frame line indicating the fingertip region of the fingertip corresponding to the second fingertip image among the fingertips shown in the captured image; given that there are a plurality of fingertips on the right side, an additional illustration in the form of a line connecting each fingertip would let the user easily identify which fingertip is associated with which finger, thus providing the intended result of showing the corresponding relation between images for clear understanding.

Regarding claim 10, Kawashima and Riddle teach the biometric information acquisition assistance device as explained for claim 1 above. Kawashima and Riddle fail to teach wherein the control unit superimposes frame lines indicating fingertip regions of a plurality of fingertips shown in the captured image detected by the detection unit, and highlights a frame line corresponding to the second fingertip image among the plurality of superimposed frame lines, as claimed. However, Riddle further teaches superimposing a description of the particular finger indicating the fingertip region of the fingertip corresponding to the second fingertip image among the fingertips shown in the captured image (Fig 9 shows each fingertip on the right, in the box, labelled with text describing which finger it is (thumb, index finger, middle finger, ring finger, little finger)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to have modified the device of Kawashima and Riddle to provide the illustration in another form, such as superimposed frame lines indicating fingertip regions of a plurality of fingertips shown in the captured image detected by the detection unit, with a highlighted frame line corresponding to the second fingertip image among the plurality of superimposed frame lines, because this will allow the user to easily identify which fingertip is associated with which finger, thus providing the intended result of showing the corresponding relation between images for clear understanding.

Regarding claim 13, Kawashima teaches the biometric information acquisition assistance device as explained for claim 1 above. Kawashima fails to teach wherein the control unit generates a biometric information acquisition screen including a calculated evaluation value, the captured image, and the second fingertip image, and outputs the biometric information acquisition screen to the monitor, as claimed. Riddle teaches the biometric information acquisition assistance device wherein the control unit generates a biometric information acquisition screen including a calculated evaluation value (para [0048]: The 3D data is exploited to accurately estimate the pose of the hand, block 104, and assess the quality of 3D data collected at each fingertip by computing a score based on the percentage of the fingerprint that is visible (fingertip region of interest (ROI) extraction), block 106. All fingertip regions with an acceptable score are unrolled into 2D fingerprint images, block 108, in order to perform comparisons with existing ink rolled database systems. Finally, a fingerprint quality score is computed for each fingerprint image based on the NIST NFIQ algorithm and poor quality images are rejected, block 110.), the captured image, and the second fingertip image, and outputs the biometric information acquisition screen to the monitor (para [0052]: An internal processor (not shown) collects the 3D data and range information, executes the disclosed algorithms, and outputs the extracted 2D fingerprint images with quality scores to a display. Fig 9). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to have modified the device of Kawashima with the extraction and enlargement of the fingertip image as taught by Riddle, because the enhanced fingerprint images significantly increase fingerprint matching performance, reduce user interaction, and significantly decrease processing times (Riddle: para [0090]; [0105]).

Regarding claim 16, Kawashima teaches a biometric information acquisition assistance method which is performed by a biometric information acquisition assistance device capable of acquiring biometric information from at least one fingertip, the biometric information acquisition assistance method comprising: acquiring a captured image of the at least one fingertip in a contactless state (Fig 2; para [0095]: The camera 12b takes and captures an image of the hand H including a finger.); detecting the at least one fingertip shown in the captured image (para [0110]: The software 103c makes the device perform as a tip area detecting module, a tip area locating module, and a verifying module. The software 103c performs by using a memory region 1102a of the RAM 1102 for a fingertip point computing treatment. The software 103c performs a binarization of an image of a hand which is taken by the camera 12b. The software 103c performs a determination in which a fingertip position of the actual finger image is determined as a fingertip point.); and generating a first fingertip image obtained by cutting out a fingertip region including the detected fingertip (Fig 7; Fig 8; para [0121]: FIG. 8 is a plan view of an image showing pixels labeled with different labeling numbers for identifying a plurality of tip areas. Then, the device performs a process for separating respective tip areas on the image data after completing the diminishing process. In FIG. 8, three tip areas are identified by reference numbers 1, 2, and 3.).

Kawashima fails to teach generating a biometric information acquisition screen including the captured image and a second fingertip image disposed outside a first region including the fingertip region of the fingertip and obtained by enlarging the first fingertip image, and outputting the biometric information acquisition screen to a monitor, as claimed.

Riddle teaches a biometric information acquisition assistance method comprising: acquiring a captured image of at least one fingertip captured in a contactless state (para [0047]: Consequently, FIG. 1 illustrates system processing of an embodiment of a system employed to capture fingerprints using a contactless means.; para [0048]: A biometric data acquisition sensor included in an embodiment of the system is triggered by an ultrasonic range finder which cues the system to the presence of an object (e.g., a hand) in the biometric data acquisition sensor's field of view (FOV). The FOV is primarily determined by the biometric data acquisition sensor used.); and generating a biometric information acquisition screen including the captured image and a second fingertip image disposed outside a first region including the fingertip region of the fingertip and obtained by enlarging the first fingertip image, and outputting the biometric information acquisition screen to a monitor (para [0071]: With reference now to FIG. 9, shown is a graphical user interface (GUI) 900 for displaying results of an embodiment of a system and method for extracting 2D fingerprint images from high resolution 3D surface data captured via contactless means. For the results shown in the GUI 900, several sets of whole hand data were collected, each at different resolutions and varying hand poses. In a test of an embodiment of a system and method for extracting 2D fingerprint images from high resolution 3D surface data captured via contactless means, timing numbers were determined for major components of the system including hand pose estimation 104 (see also FIGS. 4-5), fingertip ROI extraction 106 (see also FIG. 6), 2D unrolling 108 (see also FIGS. 8A-8D), and fingerprint quality and minutiae extraction 110. Fig 9 shows the 3D hand on the left and the 2D fingerprint images on the right, wherein the fingerprint images on the right are enlarged compared to the hand image on the left.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to have modified the method of Kawashima with the extraction and enlargement of the fingertip image as taught by Riddle, because the enhanced fingerprint images significantly increase fingerprint matching performance, reduce user interaction, and significantly decrease processing times (Riddle: para [0090]; [0105]).

Claims 3-5 are rejected under 35 U.S.C. 103 as being unpatentable over Kawashima et al. (2010/0194713) in view of Riddle et al. (2016/0180142) as applied to claim 2 above, and further in view of Maalouf et al. (2019/0333479).

Regarding claim 3, Kawashima and Riddle teach the biometric information acquisition assistance device as explained for claim 2 above. Kawashima and Riddle fail to teach a sensor that detects a gravity direction of the biometric information acquisition assistance device, wherein the predetermined orientation is an orientation of the gravity direction, as claimed.
Maalouf teaches an intelligent terminal comprising a display (205; Fig 3) and a sensor (320; Fig 3) that detects a gravity direction of the display (para [0042]), wherein the predetermined orientation is an orientation of the gravity direction (para [0042] The orientation sensor 320 may maintain the image in an upright or constant orientation on the display screen 205 with respect to gravity regardless how much the body 201 and the display screen 205 rotate about an axis to. For example, the display screen 205 displays an image in an upright position relative to the direction of gravity. Para [0071]; para [0103]). It would have been obvious to one of ordinary skill in the art before the filing date of present application to have modified the device of Kawashima and Riddle with the teachings of rotating image as taught by Maalouf, because this modifies the display data to form modified display data based on a display data attribute, an electronic device attribute, and an axial orientation of the display screen, thus providing better user viewing experience. Regarding claim 4, Kawashima teaches the biometric information acquisition assistance device as explained for claim 3 above. Kawashima fails to teach, on the biometric information acquisition screen, short sides of the captured image and the second fingertip image each having a substantially rectangular shape are each disposed parallel to a long side direction of the monitor having a substantially rectangular shape; as claimed. Riddle teaches the biometric information acquisition assistance device wherein on the biometric information acquisition screen (Fig 9; para [0071] With reference now to FIG. 9, shown is a graphical user interface (GUI) 900 for displaying), short sides of the captured image and the second fingertip image each having a substantially rectangular shape are each disposed parallel to a long side direction of the monitor having a substantially rectangular shape (Fig 9). 
It would have been obvious to one of ordinary skill in the art before the filing date of present application to have modified the device of Kawashima with extracting and enlarging the fingertip image as taught by Riddle, because enhance fingerprint images which significantly increases fingerprint matching performance, and also reduces user interaction and significantly decreases processing times (Riddle: para [0090]; [0105]). Regarding claim 5, Kawashima teaches the biometric information acquisition assistance device as explained for claim 4 above. Kawashima fails to teach, the second fingertip image is not superimposed on the captured image on the biometric information acquisition screen; as claimed. Riddle teaches the biometric information acquisition assistance device wherein the second fingertip image is not superimposed on the captured image on the biometric information acquisition screen (Fig 9). It would have been obvious to one of ordinary skill in the art before the filing date of present application to have modified the device of Kawashima with extracting and enlarging the fingertip image as taught by Riddle, because enhance fingerprint images which significantly increases fingerprint matching performance, and also reduces user interaction and significantly decreases processing times (Riddle: para [0090]; [0105]). Claim(s) 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kawashima et al. (2010/0194713) in view of Riddle et al. (2016/0180142) as applied to claim 1 above, and further in view of Lee et al. (KR 201210071384 A and its corresponding English translation). Regarding claim 11, Kawashima and Riddle teaches the biometric information acquisition assistance device; as explained for claim 11 above. Kawashima and Riddle fails to teach, wherein the detection unit detects whether the fingertip shown in the captured image is of a right hand or a left hand; as claimed. 
Lee teaches a biometric information acquisition assistance device wherein the detection unit detects whether the fingertip shown in the captured image is of a right hand or a left hand (page 6, lines 21-23: "That is, the control unit 300 determines whether the fingerprint information of the user's left or right hand is collected through the fingerprint information detected by the first sensing unit 210, and the fingerprint information collected thereafter is the other one."). It would have been obvious to one of ordinary skill in the art before the filing date of the present application to have modified the device of Kawashima and Riddle with the teachings of Lee, because this will provide a fingerprint recognition device capable of increasing the collection speed of fingerprint information and reducing maintenance costs.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Kawashima et al. (2010/0194713) in view of Riddle et al. (2016/0180142) as applied to claim 1 above, further in view of Lee et al. (KR 201210071384 A and its corresponding English translation) as applied to claim 11 above, and further in view of Tanabe et al. (2019/0303549).

Regarding claim 12, Kawashima, Riddle, and Lee teach the biometric information acquisition assistance device, as explained for claim 11 above. Kawashima, Riddle, and Lee fail to teach that, when it is determined that the hand corresponding to the fingertip detected by the detection unit does not match a fingertip of a designated hand, the control unit generates a notification requesting imaging of the designated hand and outputs the notification to the monitor, as claimed.
Tanabe teaches an electronic device comprising a fingerprint sensor (200; Fig 3) wherein, when it is determined that the fingertip detected by the detection unit does not match, the control unit generates a notification and outputs the notification to the monitor (Fig 21, Fig 22, Fig 23, Fig 34; para [0187]: "When the notification unit gives notice of position deviation, the notification unit may give notice of guide information 653 that guides change of the position of the finger on the detecting surface 201. FIG. 23 and FIG. 24 each illustrate a diagram showing one example of the guide information 653 notified by the display 120, i.e., the guide information 653 displayed by the display 120. FIG. 23 illustrates the guide information 653 when the position of the finger on the detecting surface 201 is deviated toward the left with respect to the reference position. FIG. 24 illustrates the guide information 653 when the position of the finger on the detecting surface 201 is deviated toward the right with respect to the reference position.").

It would have been obvious to one of ordinary skill in the art before the filing date of the present application to have modified the device of Kawashima, Riddle, and Lee with the teachings of Tanabe, because this will provide a system in which the user receives feedback and is allowed to make corrections, thus improving the user experience.
Furthermore, based on the teachings of Tanabe, it would have been obvious to one of ordinary skill in the art before the filing date of the present application to have modified the device of Kawashima, Riddle, and Lee to provide another type of notification that allows the user to make a suitable correction, such as, when it is determined that the hand corresponding to the fingertip detected by the detection unit does not match a fingertip of a designated hand, generating a notification requesting imaging of the designated hand and outputting the notification to the monitor, in order to yield predictable results.

Allowable Subject Matter

Claims 14 and 15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: Regarding claim 14, the prior art of record fails to teach the following claim limitations, in combination with all other claim limitations: "an evaluation unit that calculates an evaluation value indicating whether a fingerprint of the fingertip shown in the first fingertip image is in focus, wherein the control unit outputs the first fingertip image for which the calculated evaluation value is equal to or greater than a threshold value."

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PREMAL PATEL, whose telephone number is (571) 270-5892. The examiner can normally be reached Mon-Fri 8-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, MATTHEW EASON, can be reached at 571-270-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PREMAL R PATEL/
Primary Examiner, Art Unit 2624
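The gravity-based image-rotation technique the rejection cites from Maalouf (keeping a displayed image upright relative to gravity however the device body rotates) can be illustrated with a minimal sketch. This is not the method of any cited reference; the function name, the use of in-plane accelerometer components, and the sign convention (gravity reads (0, +g) when the device is upright) are all assumptions for illustration only.

```python
import math

def upright_rotation_deg(ax: float, ay: float) -> float:
    """Illustrative sketch: angle in degrees by which to counter-rotate a
    displayed image so it stays upright relative to gravity. ax and ay are
    the accelerometer's gravity components in the screen plane (assumed
    convention: (0, +g) when the device is upright)."""
    # Device tilt about the screen's normal, recovered from the measured
    # gravity vector; rotating the image by the opposite angle keeps it
    # upright with respect to gravity.
    tilt = math.degrees(math.atan2(ax, ay))
    return -tilt

# Device upright: no rotation needed.
print(upright_rotation_deg(0.0, 9.8))        # 0 degrees
# Device rotated so gravity lies along +x: counter-rotate the image 90 degrees.
print(round(upright_rotation_deg(9.8, 0.0))) # -90 degrees
```

Under this sketch, the displayed image tracks the gravity direction continuously rather than snapping between portrait and landscape, which is the behavior Maalouf's para [0042] describes.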

Prosecution Timeline

Jun 25, 2025
Application Filed
Feb 20, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596517
LINKED DISPLAY SYSTEM AND LINKED DISPLAY METHOD
2y 5m to grant Granted Apr 07, 2026
Patent 12592200
DISPLAY PANEL AND DISPLAY APPARATUS
2y 5m to grant Granted Mar 31, 2026
Patent 12579846
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND NON-TRANSITORY RECORDING MEDIUM
2y 5m to grant Granted Mar 17, 2026
Patent 12578823
DISPLAY INTERFACE TESTING METHOD AND APPARATUS, STORAGE MEDIUM AND ELECTRONIC DEVICE
2y 5m to grant Granted Mar 17, 2026
Patent 12572220
SYSTEMS AND METHODS FOR MULTI-MODAL INTERACTION ANALYSIS
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
78%
Grant Probability
84%
With Interview (+6.3%)
2y 5m
Median Time to Grant
Low
PTA Risk
Based on 955 resolved cases by this examiner. Grant probability derived from career allow rate.
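The note above says the grant probability is derived from the career allow rate, so the headline figures follow from simple arithmetic over the examiner's career counts. A minimal sketch, assuming the tool rounds the raw allow rate and adds the interview lift in percentage points:

```python
# Examiner career counts and interview lift, as shown in this report.
granted, resolved = 744, 955
interview_lift = 6.3  # percentage points, from resolved cases with interview

allow_rate = 100 * granted / resolved      # career allow rate, ~77.9%
print(round(allow_rate))                   # 78  -> "Grant Probability"
print(round(allow_rate + interview_lift))  # 84  -> "With Interview"
```

This reproduces the 78% and 84% figures shown; whether the tool applies any further adjustment (art unit, rejection type) is not stated here.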
