Prosecution Insights
Last updated: April 19, 2026
Application No. 18/431,821

IMAGE ACQUISITION METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM

Status: Non-Final Office Action (§103)
Filed: Feb 02, 2024
Examiner: WU, MING HAN
Art Unit: 2618
Tech Center: 2600 — Communications
Assignee: Tencent Technology (Shenzhen) Company Limited
OA Round: 1 (Non-Final)

Predictions:
Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 8m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allowance Rate: 76% (282 granted / 370 resolved; +14.2% vs Tech Center average — above average)
Interview Lift: +23.3% (resolved cases with vs. without interview)
Average Prosecution: 2y 8m
Currently Pending: 35
Total Applications: 405 (across all art units)
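The headline figures above follow directly from the raw career counts; a minimal sketch verifying them (the variable names are illustrative, not taken from any analytics tool):

```python
# Reproduce the examiner-level figures from the raw career counts above.
granted = 282        # applications allowed over the examiner's career
resolved = 370       # allowed + abandoned (pending cases excluded)
total_apps = 405     # all applications across art units

allow_rate = granted / resolved
pending = total_apps - resolved

print(f"Career allowance rate: {allow_rate:.1%}")  # 76.2%, reported as 76%
print(f"Currently pending: {pending}")             # 35
```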

Statute-Specific Performance

§101: 7.8% (-32.2% vs TC avg)
§103: 68.3% (+28.3% vs TC avg)
§102: 2.1% (-37.9% vs TC avg)
§112: 12.6% (-27.4% vs TC avg)

Tech Center averages are estimates; based on career data from 370 resolved cases.
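Each statute rate above is reported alongside a delta against the Tech Center average, so the implied TC baseline is simply the rate minus the delta. A short sketch over the figures above (in this data every statute works out to the same 40.0% baseline):

```python
# Implied Tech Center baselines: examiner rate minus reported delta,
# both in percentage points, taken from the table above.
stats = {
    "§101": (7.8, -32.2),
    "§103": (68.3, +28.3),
    "§102": (2.1, -37.9),
    "§112": (12.6, -27.4),
}
for statute, (rate, delta) in stats.items():
    baseline = rate - delta
    print(f"{statute}: implied TC average {baseline:.1f}%")  # 40.0% for each
```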

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 2, 4-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Suzuki et al. (Publication: US 2013/0308834 A1) in view of Watanabe et al. (Publication: US 2018/0164589 A1).
Regarding claim 1, see rejection on claim 17. Regarding claim 2, see rejection on claim 18. Regarding claim 4, see rejection on claim 20. Regarding claim 5, see rejection on claim 13. Regarding claim 6, see rejection on claim 14. Regarding claim 7, see rejection on claim 15. Regarding claim 8, see rejection on claim 16. Regarding claim 9, see rejection on claim 17. Regarding claim 10, see rejection on claim 18. Regarding claim 11, see rejection on claim 19. Regarding claim 12, see rejection on claim 20.

Regarding claim 13, Suzuki in view of Watanabe disclose all the limitations of claim 9. Suzuki discloses triggering to output first prompt information, the first prompt information being used for prompting the target object to remain the target part in a current state ([0172] Accordingly, the authentication apparatus 20 performs notification such as that on the guidance presentation screen 85. The guidance presentation screen 85 displays image-based guidance presentations 86 and 87, and a message presentation 88 which is a message-based guidance presentation. The guidance presentation 86, which is an image looking at the palm from above, displays the posture of the correct position (standard model) and the posture at the image capturing position in a comparable manner. It suffices that the guidance presentation 86 is useful for grasping the displacement in the horizontal direction, and an image looking at the palm from below may also be used. The guidance presentation 87, which is an image looking at the palm from the side, displays the posture of the correct position and the posture at the image capturing position (estimated from analysis of surface information) in a comparable manner. [0069] The biometric information extraction unit 214 extracts biometric information to be used for matching from the palm image obtained by the surface information analysis unit 210.
Specifically, the biometric information extraction unit 214 extracts a vein pattern in the palm image, or information for matching included in the vein pattern. The information for matching includes, for example, characteristics points (edge point or branch point of a vein) included in the vein pattern, the number of veins crossing with a straight line binding a characteristics point and a proximate characteristic point, and a small image centered on a characteristics point. The matching unit 215 compares and performs matching of the biometric information (information for matching) extracted by the biometric information extraction unit 214 with a registered template which has been preliminarily registered. [0173] The message presentation 88 includes a state message 89, a guidance message A90, and a guidance message B91. The state message 89 indicates the posture that caused matching failure, in order to correct the posture of which the user is unaware. For example, the state message 89 indicates that "your finger is slightly bent". The guidance message A90 is a message which alarms the user of the attitude when being photographed, in order to correct the instability of the posture of which the user is unaware. For example, the message A90 provides an instruction such as "please relax". The guidance message B91 is a message which specifically indicates an incorrect posture of the user. For example, the guidance message B91 provides a guidance such as "hold your palm so that the entire palm looks flat when seen from the side".) ; and acquiring the key area image of the target part by using the image acquisition element when the target part remains in the current state ([0119] The entire palm region 68 is a region, of the hand 60, for which the surface information extraction unit 209 obtains luminance information. The entire palm region 68 is a region having a plurality of subregions 69 collected therein. 
The location of arranging each subregion 69 has been preliminarily set. Each subregion 69 has no mutually overlapping regions and is arranged adjacent to each other. A subregion 69 is a region having a plurality of finer subregions 70 collected therein. The location of arranging each finer subregion 70 has been preliminarily set. Each finer subregion 70 has no mutually overlapping regions and is arranged adjacent to each other. A variation of the evaluation unit of the surface information is different from the second embodiment in that in the variation, the entire palm region 68, the subregion 69, and the finer subregion 70 are respectively different in shape, whereas in the second embodiment, the entire palm region 61, the subregion 62, and the finer subregion 63 are respectively homothetic.). Watanabe discloses prompting the target object to remain the target part in a current state unchanged ([0144] At Step S52, the moving image SC14a, which is one of the elements of the moving image replay screen SC14, is displayed in a reduced size that fits in a space between one side of the display region 21 (a left side in the example in FIG. 12) and one side of the outline of the smartphone A (a left side in the example in FIG. 12) such that an aspect ratio remains unchanged as compared to Step S51.); and acquiring while the target part remains in the current state unchanged ([0144] At Step S52, the moving image SC14a, which is one of the elements of the moving image replay screen SC14, is displayed in a reduced size that fits in a space between one side of the display region 21 (a left side in the example in FIG. 12) and one side of the outline of the smartphone A (a left side in the example in FIG. 12) such that an aspect ratio remains unchanged as compared to Step S51.)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Suzuki with prompting the target object to remain the target part in a current state unchanged, and acquiring while the target part remains in the current state unchanged, as taught by Watanabe. The motivation for doing so is to improve operability.

Regarding claim 14, Suzuki in view of Watanabe disclose all the limitations of claim 9. Suzuki discloses, when the currently displayed mapping pattern does not match the preset recognition pattern, triggering to output second prompt information, the second prompt information being used for prompting the target object to adjust the relative position of the target part relative to the image acquisition element, to enable the displayed mapping pattern that changes as the relative position changes to match the preset recognition pattern ([0172] Accordingly, the authentication apparatus 20 performs notification such as that on the guidance presentation screen 85. The guidance presentation screen 85 displays image-based guidance presentations 86 and 87, and a message presentation 88 which is a message-based guidance presentation. The guidance presentation 86, which is an image looking at the palm from above, displays the posture of the correct position (standard model) and the posture at the image capturing position in a comparable manner. It suffices that the guidance presentation 86 is useful for grasping the displacement in the horizontal direction, and an image looking at the palm from below may also be used. The guidance presentation 87, which is an image looking at the palm from the side, displays the posture of the correct position and the posture at the image capturing position (estimated from analysis of surface information) in a comparable manner. It suffices that the guidance presentation 87 is useful for grasping the displacement in the vertical direction.
Although a captured image may be used for the guidance presentation 86, it may also be CG (Computer Graphic). The guidance presentation 87 may use CG. The guidance presentations 86 and 87 display the contour of the posture at the correct position with a solid line and the contour of the posture at the image capturing position with a dashed line, for ease of comparison. [0069] The biometric information extraction unit 214 extracts biometric information to be used for matching from the palm image obtained by the surface information analysis unit 210. Specifically, the biometric information extraction unit 214 extracts a vein pattern in the palm image, or information for matching included in the vein pattern. The information for matching includes, for example, characteristics points (edge point or branch point of a vein) included in the vein pattern, the number of veins crossing with a straight line binding a characteristics point and a proximate characteristic point, and a small image centered on a characteristics point. The matching unit 215 compares and performs matching of the biometric information (information for matching) extracted by the biometric information extraction unit 214 with a registered template which has been preliminarily registered. [0173] The message presentation 88 includes a state message 89, a guidance message A90, and a guidance message B91. The state message 89 indicates the posture that caused matching failure, in order to correct the posture of which the user is unaware. For example, the state message 89 indicates that "your finger is slightly bent". The guidance message A90 is a message which alarms the user of the attitude when being photographed, in order to correct the instability of the posture of which the user is unaware. For example, the message A90 provides an instruction such as "please relax". The guidance message B91 is a message which specifically indicates an incorrect posture of the user. 
For example, the guidance message B91 provides a guidance such as "hold your palm so that the entire palm looks flat when seen from the side".).

Regarding claim 15, Suzuki in view of Watanabe disclose all the limitations of claim 9. Suzuki discloses performing object detection and liveness detection on an object within the acquisition range of the image acquisition element ([0172] Accordingly, the authentication apparatus 20 performs notification such as that on the guidance presentation screen 85. The guidance presentation screen 85 displays image-based guidance presentations 86 and 87, and a message presentation 88 which is a message-based guidance presentation. The guidance presentation 86, which is an image looking at the palm from above, displays the posture of the correct position (standard model) and the posture at the image capturing position in a comparable manner. It suffices that the guidance presentation 86 is useful for grasping the displacement in the horizontal direction, and an image looking at the palm from below may also be used. The guidance presentation 87, which is an image looking at the palm from the side, displays the posture of the correct position and the posture at the image capturing position (estimated from analysis of surface information) in a comparable manner. It suffices that the guidance presentation 87 is useful for grasping the displacement in the vertical direction. Although a captured image may be used for the guidance presentation 86, it may also be CG (Computer Graphic). The guidance presentation 87 may use CG. The guidance presentations 86 and 87 display the contour of the posture at the correct position with a solid line and the contour of the posture at the image capturing position with a dashed line, for ease of comparison. [0173] The message presentation 88 includes a state message 89, a guidance message A90, and a guidance message B91.
The state message 89 indicates the posture that caused matching failure, in order to correct the posture of which the user is unaware. For example, the state message 89 indicates that "your finger is slightly bent". The guidance message A90 is a message which alarms the user of the attitude when being photographed, in order to correct the instability of the posture of which the user is unaware. For example, the message A90 provides an instruction such as "please relax". The guidance message B91 is a message which specifically indicates an incorrect posture of the user. For example, the guidance message B91 provides a guidance such as "hold your palm so that the entire palm looks flat when seen from the side". The authentication apparatus 20 may provide audio notification in addition to, or in place of, the message presentation 88 and guidance messages, "liveness detection".); and when it is detected that the object is the target part of the target object and liveness is detected, determining that the target part of the target object triggers the pattern display operation ([0064] The control unit 200 totally controls respective processing units to perform user authentication. The storage unit 201 stores and retains image information obtained from the sensor-unit-embedded mouse 24, various databases, and the like. The notification unit 202 generates and displays on the display 22 desired messages for the user such as guidance about the manner of holding the palm above the sensor-unit-embedded mouse 24, notification of success or failure of the matching, or the like. In addition, the notification unit 202 generates and outputs from a loud speaker (not illustrated), desired audio messages for the user such as guidance about the manner of holding the palm above the sensor-unit-embedded mouse 24 or notification of success or failure of the matching, "display").

Regarding claim 16, Suzuki in view of Watanabe disclose all the limitations of claim 9.
Suzuki discloses transmitting the acquired key area image to a server, to enable the server to perform identity authentication on the key area image and perform resource transfer when the identity authentication passes ([0061], [0063] - Fig. 2, transmit the image of the user's palm from the sensor unit 43 to the authentication apparatus and authentication server. [0059] Here, palm vein authentication in the authentication apparatus 20 is described. A user requesting authentication inputs identification information for identifying the user (e.g., user ID) via the keyboard 23, the sensor-unit-embedded mouse 24, or the IC card reader/writer 25. The authentication apparatus 20 prompts the user to input biometric information by presentation using the display 22. The user inputs biometric information by holding the palm above the sensor-unit-embedded mouse 24. Upon receiving the input of a palm vein image as biometric information, the authentication apparatus 20 performs matching of the input vein image (biometric information) with a registered template. The registered template may be stored in a storage unit of the processing apparatus 21, a storage unit of the authentication server 50, or a storage unit of the IC card 26 of the user.); and when the identity authentication passes, receiving a resource transfer result fed back ([0102] [Step S22] Upon receiving the result of successful matching, the processing apparatus 21 determines identity confirmation and, subsequent to performing a desired procedure associated with the successful authentication, terminates the authentication procedure.). Watanabe discloses displaying a resource transfer result fed back by the server ([0054] The storage 9 stores therein various programs and data. The storage 9 may include a non-volatile storage device, such as a flash memory. The programs stored in the storage 9 include a control program 90.
The storage 9 may be configured by a combination of a portable storage medium, such as a memory card, and a reading/writing device that performs read and write from and to the storage medium. In this case, the control program 90 may be stored in the storage medium. The control program 90 may be acquired from a server device or another mobile electronic device, such as a smartphone or a watch-type device, by wireless communication or wired communication. [0055] The control program 90 provides functions related to various types of control for operating the wearable device 1. The functions provided by the control program 90 include a function to detect a real object (predetermined object) that is present in scenery in front of the user from a detection result of the detector 5, a function to control displays of the display units 2a and 2b, and the like.). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Suzuki in view of Watanabe with displaying a resource transfer result fed back by the server as taught by Watanabe. The motivation for doing so is to improve operability.

Regarding claim 17, Suzuki discloses a non-transitory computer-readable storage medium, having a computer program stored thereon that, when executed by a processor of a computer device, causes the computer device to perform an image acquisition method including ([0054], Fig. 2 - The authentication system 10 is a system that recognizes characteristics of living body to identify and authenticate an individual and to acquire the image. The authentication system 10 is configured to include an authentication apparatus 20, an authentication apparatus 30, an authentication apparatus 40, an authentication server 50, and a network 51. [0076] The entirety of the processing apparatus is controlled by a CPU (Central Processing Unit) 101.
The CPU 101 has a RAM (Random Access Memory) 102, an HDD (Hard Disk Drive) 103, a communication interface 104, a graphic processing unit 105, and an input-output interface 106 connected thereto via a bus 107. The RAM 102 has programs of the OS (Operating System) executed by the CPU 101 and at least a part of application programs to perform the following:): displaying a mapping pattern corresponding to a key area of a target part of a target object in response to a pattern display operation triggered by the target object ( [0094] The processing apparatus 21 determines whether or not the image of the palm which has been corrected to the correct position is usable for matching. The determination is performed by comparing the image of the palm which has been corrected to the correct position with an image of the registered template, or comparing the image of the palm which has been corrected to the correct position with a model, “mapping pattern”. [0172] the authentication apparatus 20 performs compare notification such as that on the display guidance presentation screen 85. The guidance presentation screen 85 displays image-based guidance presentations 86 and 87, and a message presentation 88 which is a message-based guidance presentation. The guidance presentation 86, which is an image looking at the palm from above, displays the posture of the correct position (standard model) and the posture at the image capturing position in a comparable manner. The guidance presentation 87, which is an image looking at the palm from the side, displays the posture of the correct position and the posture at the image capturing position (estimated from analysis of surface information) in a comparable manner. The guidance presentations 86 and 87 display the contour of the posture at the correct position with a solid line and the contour of the posture at the image capturing position with a dashed line, for ease of comparison, “pattern display matching”. 
); changing a display state of the currently displayed mapping pattern when a relative position of the target part relative to an image acquisition element changes ( [0172], [0173] - The guidance presentation screen displays image-based guidance presentations. The changing state message indicates the changing posture that caused matching failure, in order to correct the posture of which the user is unaware. The state message indicates that "your finger is slightly bent". The guidance message A90 is a message which alarms the user of the attitude when being photographed, in order to correct the instability of the posture of which the user is unaware. The message A90 provides an instruction such as "please relax". The guidance message B91 is a message which specifically indicates an incorrect posture of the user. The guidance message B91 provides a guidance such as "hold your palm so that the entire palm looks flat when seen from the side". The authentication apparatus 20 may provide audio notification in place of the message presentation 88 thus “changing a display state” can be read on. [0094], [0172] - determines whether or not the changing image of the palm which has been corrected to the correct position is usable for matching. The determination is performed by comparing, mapping, the image of the palm which has been corrected to the correct position with a model thus "mapping pattern” and can be read on. The guidance presentation screen displays image-based guidance presentations, and a changing message presentation which is a message-based guidance presentation. The guidance presentation, which is an image looking at the palm ”target” from above, displays the posture of the correct position (standard model) and the posture at the image capturing position in a comparable manner thus “changing a display state of the currently displayed mapping pattern” and “changing it when a position of the target part changes” can be read on. 
[0087] The distance measuring unit obtains information of the distance to the target living body. The sensor unit is able to measure the photographing timing with the distance measuring sensor to photograph the changing position of palm within a predetermined range of distance, "a relative position of the target part relative to an image acquisition element changes" can be read on.); and acquiring a key area image of the target part contactlessly by using the image acquisition element ([0064] - The notification unit 202 generates and displays on the display 22 desired messages for the user such as guidance about the manner of holding the palm above the sensor-unit-embedded mouse 24, "contactlessly". [image omitted]); a preset recognition pattern ([0056] The authentication server 50 stores identification information for identifying a user in association with biometric information (template) which is preliminarily registered prior to biometric authentication, "template = preset recognition pattern".). Suzuki does not disclose, however Watanabe discloses, acquiring an image when the currently displayed mapping pattern matches a pattern ([0123] - The wearable device 1 may detect the right hand R holding the smartphone A from a detection result of the detector 5 or a captured image captured by the imager 3, and may detect movement of the thumb of the right hand R (in other words, the upper limb). Upon detecting the movement of the thumb of the right hand R (upper limb), the wearable device 1 estimates which of the display directions of a plurality of screens SC displayed on the display unit 2 matches the moving direction. The wearable device 1 transmits, to the smartphone A, a signal indicating that the estimated screen SC matching the moving direction has been selected.
The smartphone A shifts to the display that is based on the selected screen SC, on the basis of the signal received from the wearable device 1, "shifts = acquiring an image for display based on matching the moving direction".). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Suzuki with acquiring an image when the currently displayed mapping pattern matches a pattern as taught by Watanabe. The motivation for doing so is to improve operability.

Regarding claim 18, Suzuki in view of Watanabe disclose all the limitations of claim 17. Suzuki discloses changing a display position of the currently displayed mapping pattern as a plane position of the key area of the target part changes within an acquisition range of the image acquisition element ([0094] [Step S14] The processing apparatus 21 determines whether or not the image of the palm which has been corrected to the correct position is usable for matching. The determination is performed by comparing the image of the palm which has been corrected to the correct position with an image of the registered template, or comparing the image of the palm which has been corrected to the correct position with a model. When the processing apparatus 21 determines that the image of the palm which has been corrected to the correct position is usable for matching, the processing apparatus 21 proceeds to step S19, or proceeds to step S15 when the image is determined to be unusable for matching. [0066] From the object determined to be a palm by the palm determination unit 206, the palm clipping unit 207 clips the palm (the fingers and the wrist may be included). The contour correcting unit 208 corrects the position (backward and forward, rightward and leftward positional correction), the size (upward and downward height correction), and orientation (rotational correction) of the clipped palm to the correct position.
[0137] [Step S56] When there exists comparative data of the palm image, the surface information correcting unit 211 overviews the entire palm to evaluate the concavity/convexity matching. When the evaluation of concavity/convexity matching falls within a range of a predetermined threshold, the surface information correcting unit 211 terminates the surface information correcting procedure. When, on the other hand, the evaluation of concavity/convexity matching does not fall within a range of the predetermined threshold, the surface information correcting unit 211 proceeds to step S57 “range”.). Watanabe discloses changing a display size of the currently displayed mapping pattern in an opposite direction as a spatial height changes in an opposite direction, when the spatial height of the target part relative to the image acquisition element changes ([0090] - the predetermined space 51a is defined as a three-dimensional space with a depth in the front and back direction of the user (for example, in the Z-axis direction in FIG. 3). It is assumed that the wearable device 1 has activated the detector 5 and a predetermined object in the detection range 51 (not illustrated) is detectable “height, depth is Z axis”. [0160] When the smartphone A moves in a direction away from the moving image SC14a (Step S72), opposite direction, the wearable device 1 detects a distance d by which the smartphone A is separated from the moving image SC14a (or a distance between the smartphone A and the moving image SC14a). 
If the distance d becomes longer than a predetermined length (in other words, if the smartphone A is separated from the moving image SC14a by a predetermined length or longer), the wearable device 1 changes the size of the moving image SC14a based on the position of the smartphone A at this point of time (Step S73).); and changing position in a same direction as a plane position of the key area of the target part ([0120] At Step S22, the thumb of the right hand R of the user remains in touch with the object OB1 (web browser function) displayed on the smartphone A. If the smartphone A detects an operation of moving the contact position (in other words, sliding) in a direction toward any of the screens SC3 to SC5 displayed by the wearable device 1 (for example, in directions of dotted arrows at Step S22) from the state in which the thumb of the right hand R is in touch with the display position of the object OB1, the smartphone A may perform a process different from the above-described first process. For example, the smartphone A detects an operation of moving the contact position in a direction D1 (leftward direction) toward the screen SC3 displayed by the wearable device 1 from a state in which the thumb of the right hand R is in contact with the display position of the object OB1. [0172] - if a finger in touch with the smartphone A is slid in a direction in which the additional information is displayed, the wearable device 1 recognizes that the additional information is selected.). 
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Suzuki in view of Watanabe with changing a display size of the currently displayed mapping pattern in an opposite direction as a spatial height changes in an opposite direction, when the spatial height of the target part relative to the image acquisition element changes; and changing position in a same direction as a plane position of the key area of the target part as taught by Watanabe. The motivation for doing so is to improve operability.

Regarding claim 20, Suzuki in view of Watanabe disclose all the limitations of claim 17. Suzuki discloses performing integrity detection on the target part within the acquisition range of the image acquisition element, to obtain an integrity detection result ([0121] [Step S41] The surface information analysis unit 210 obtains, for each site, surface information (strength/weakness and range of concavity/convexity) extracted by the surface information extraction unit 209. Here, a site, which refers to a region of the palm divided into a plurality of regions, is predefined. Each site is related in association with one or more of the subregions 62 described above. For example, there are five types of parts, i.e., a central part 91 located at the center of the palm of the hand 60, an upper part 95 and a lower part 93 located above and below the central part 91, and a thumb part 94 and a little finger part 92 located at the right and left of the central part 91 (FIG. 13A). Alternatively, each site may be defined in association with the skeleton and muscles of a human, such as being divided into six parts, i.e., the first to fifth metacarpal bones and the carpal bone. [0122] [Step S42] The surface information analysis unit 210 determines, for one of the plurality of sites, whether or not it is a strong convex range or a strong concave range.
Determination of whether a site is a strong convex range or a strong concave range is performed by comparison with a predetermined threshold. When the surface information analysis unit 210 determines that either a strong convex range or a strong concave range exists in a site to be determined, the surface information analysis unit 210 proceeds to step S43. When the surface information analysis unit 210 determines that neither a strong convex range nor a strong concave range exists in the site to be determined, the surface information analysis unit 210 proceeds to step S44.); and performing the operation of acquiring a key area image of the target part by using the image acquisition element when it is determined, based on at least one of the movement speed or the integrity detection result, that the detected target part satisfies an acquisition condition ( [0106] [Step S32] The surface information extraction unit 209 divides the entire palm region 61 into a plurality of subregions 62. The location of providing each subregion 62 has been preliminarily set. Each subregion 62 is arranged allowing existence of mutually overlapping regions (see FIG. 8). [0126] [Step S46] the surface information analysis unit 210 determines whether or not the site to be determined is a weak concave range. Determination of being a weak concave range is performed by comparison with a predetermined threshold. The surface information analysis unit 210, when determining that there is a weak concave range in the site to be determined, proceeds to step S47. The surface information analysis unit 210, when determining that there is no weak concave range in the site to be determined, proceeds to step S4). 
Watanabe discloses obtaining movement speed of the target within the acquisition range ([0009] - If the detector detects that another electronic device has entered the predetermined space while an image is displayed on the display unit, a size of the image is changed in accordance with a movement of a position of the another electronic device that has entered, and if a moving speed of the another electronic device in the predetermined space becomes less than a predetermined value, a change in the size of the image is completed.). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Suzuki in view of Watanabe with obtaining movement speed of the target within the acquisition range as taught by Watanabe. The motivation for doing so is to improve operability. Claims 3 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Suzuki et al. (Publication: US 2013/0308834 A1) in view of Watanabe et al. (Publication: US 2018/0164589 A1) and Menier et al. (Publication: US 2020/0408519 A1). Regarding claim 3, see rejection on claim 19. Regarding claim 19, Suzuki in view of Watanabe disclose all the limitations of claim 17. Suzuki discloses obtaining at least effective distances corresponding to the key area of the target part by using a sensor when the target part is within the acquisition range of the image acquisition element ([0087] The distance measuring unit 24d obtains information of the distance to the target living body. The sensor unit 24a is able to measure the photographing timing with the distance measuring sensor to photograph the palm within a predetermined range of distance. 
The image capturing unit 24c may perform continuous photographing (e.g., 15 frames per second) at a predetermined timing and use one or more of the captured images for matching.); constructing a plane of the key area based on the at least distances ([0067] - the surface information extraction unit 209 may obtain distance information from the sensor-unit-embedded mouse 24 as information associated with the palm image, and may obtain the distance between a distance measuring sensor and the palm surface as surface information.); determining a relative posture of the key area based on a relative angle between the plane and a standard plane ([0064] - The notification unit 202 generates and displays on the display 22 desired messages for the user such as guidance about the manner of holding the palm above the sensor-unit-embedded mouse 24. “standard plane” [0094] [Step S14] The processing apparatus 21 determines whether or not the image of the palm which has been corrected to the correct position is usable for matching. The determination is performed by comparing the image of the palm which has been corrected to the correct position with an image of the registered template, or comparing the image of the palm which has been corrected to the correct position with a model. When the processing apparatus 21 determines that the image of the palm which has been corrected to the correct position is usable for matching, the processing apparatus 21 proceeds to step S19, or proceeds to step S15 when the image is determined to be unusable for matching. [0066] From the object determined to be a palm by the palm determination unit 206, the palm clipping unit 207 clips the palm (the fingers and the wrist may be included). 
The contour correcting unit 208 corrects the position (backward and forward, rightward and leftward positional correction), the size (upward and downward height correction), and orientation (rotational correction) of the clipped palm to the correct position.); and adjusting the display state of the mapping pattern based on the relative posture ([0173] The message presentation 88 includes a state message 89, a guidance message A90, and a guidance message B91. The state message 89 indicates the posture that caused matching failure, in order to correct the posture of which the user is unaware. For example, the state message 89 indicates that "your finger is slightly bent". The guidance message A90 is a message which alarms the user of the attitude when being photographed, in order to correct the instability of the posture of which the user is unaware. For example, the message A90 provides an instruction such as "please relax". The guidance message B91 is a message which specifically indicates an incorrect posture of the user. For example, the guidance message B91 provides a guidance such as "hold your palm so that the entire palm looks flat when seen from the side". The authentication apparatus 20 may provide audio notification in addition to, or in place of the message presentation 88. [0174] As thus described, since the authentication apparatus 20 provides the user with evaluation of the manner of holding the palm (notification of the state message 89), improvement of the user's learning speed of how to hold the palm may be expected to be enhanced. In addition, since the authentication apparatus 20 alarms the user of the attitude when being photographed (notification of the guidance message A 90), instability of the posture of which the user is unaware is expected to be corrected. 
In addition, since the authentication apparatus 20 specifically indicates an incorrect posture of the user (notification of the guidance presentation 86, the guidance presentation 87, and the guidance message B 91), the user is expected to appropriately correct the posture.). Suzuki in view of Watanabe do not disclose the following limitations; however, Menier discloses determining based on a relative angle between the virtual plane and a standard plane ([0024] said camera of the first pair of cameras and said camera of the second pair of cameras may be aligned in a direction forming an angle lying in the range 5° to 20° with a horizontal plane or with a vertical axis; [0025] said cameras may be distributed around the scene over a virtual surface in the shape of a sphere having a certain radius, it being possible for said certain intra-pair spacing to be such that an algebraic ratio of said certain spacing to said certain radius is less than 0.35; [0026] said inter-pair spacings between two adjacent pairs of cameras may be larger than said certain intra-pair spacing by a factor at least equal to 1.4; [0027] said cameras may be distributed around the scene in such a manner as to form a third pair of cameras having an inter-pair spacing with the second pair of cameras that is greater than or equal to the inter-pair spacing between the second pair of cameras and the first pair of cameras, and the computation device may be configured in such a manner as to apply said stereoscopy digital processing by comparison between the images produced by a camera of the third pair of cameras and the images produced by one of the cameras of the second pair of cameras. 
by comparison between the images produced by the cameras of said first and third pairs of cameras that are in vertical alignment and/or between the images produced by the cameras of said second and third pairs of cameras, in particular the images produced by the other camera of the second pair of cameras and said camera of the third pair of cameras) obtaining at least three effective distances corresponding to the key area by using three distance sensors ([0057] the local environment of any given camera is defined by the positions, characterized by the directions and the distances of the other cameras in the system relative to said given camera, by considering at least the two cameras closest to said given camera, such as, for example, two cameras, three cameras, four cameras, or up to the number of cameras of the elementary pattern minus one. [0085] The dimensions indicated numerically in FIG. 3C are expressed in meters (m) and correspond to the distances between the planes in which the cameras are situated, for the particular cases when they are arranged at the surface of a virtual surface S having the shape of a sphere of center Cs and of radius Rs of 4 meters, centered on the scene 110, as indicated in FIG. 1B.) constructing a virtual plane based on the at least three effective distances ([0085] The dimensions indicated numerically in FIG. 3C are expressed in meters (m) and correspond to the distances between the planes in which the cameras are situated, for the particular cases when they are arranged at the surface of a virtual surface S having the shape of a sphere of center Cs and of radius Rs of 4 meters, centered on the scene 110, as indicated in FIG. 1B. [0006] In order to be optimal, 3D modelling is considered to require pairs of cameras that are aligned vertically or horizontally, and an angular distance between the cameras of each pair that lies in the range 5° to 15°.) 
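As a geometric aside, the plane-fitting step recited in claim 19 (at least three effective distances from distance sensors, a virtual plane constructed from them, and a relative angle to a standard plane) can be sketched as follows. The sensor layout, coordinates, and readings below are hypothetical illustrations, not values taken from Suzuki, Watanabe, or Menier:

```python
import numpy as np

# Hypothetical layout: three distance sensors at known (x, y) positions
# on the image acquisition element's plane (z = 0), each measuring the
# perpendicular distance to the target surface (e.g., a palm).
SENSOR_XY = np.array([[0.0, 0.0],
                      [4.0, 0.0],
                      [0.0, 3.0]])  # cm, illustrative values

def palm_plane_angle(distances, sensor_xy=SENSOR_XY):
    """Construct the virtual plane through the three measured points and
    return its tilt angle (degrees) relative to a standard plane held
    parallel to the sensors (z = const)."""
    d = np.asarray(distances, dtype=float)
    # Three 3-D points on the target surface: (x, y, measured distance).
    pts = np.column_stack([sensor_xy, d])
    # Normal of the virtual plane = cross product of two edge vectors.
    n = np.cross(pts[1] - pts[0], pts[2] - pts[0])
    n /= np.linalg.norm(n)
    # The standard plane's normal is (0, 0, 1); the relative posture is
    # the angle between the two normals.
    cos_theta = abs(n @ np.array([0.0, 0.0, 1.0]))
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

print(palm_plane_angle([10.0, 10.0, 10.0]))  # flat target -> 0.0
print(palm_plane_angle([10.0, 14.0, 10.0]))  # tilted along x -> 45.0
```

A display state (e.g., the mapping pattern's orientation) could then be adjusted whenever the returned angle exceeds a tolerance; the function name and thresholds here are purely illustrative.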
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Suzuki in view of Watanabe with determining based on a relative angle between the virtual plane and a standard plane; obtaining at least three effective distances corresponding to the key area by using three distance sensors; and constructing a virtual plane based on the at least three effective distances as taught by Menier. The motivation for doing so is to achieve optimal 3D modeling.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ming Wu whose telephone number is (571)270-0724. The examiner can normally be reached Monday - Friday, 9:30am - 6:00pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Devona Faulk, can be reached at 571-272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MING WU/
Primary Examiner, Art Unit 2618

Prosecution Timeline

Feb 02, 2024
Application Filed
Jan 02, 2026
Non-Final Rejection — §103
Apr 09, 2026
Applicant Interview (Telephonic)
Apr 09, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597109
SYSTEMS AND METHODS FOR GENERATING THREE-DIMENSIONAL MODELS USING CAPTURED VIDEO
2y 5m to grant Granted Apr 07, 2026
Patent 12579702
METHOD AND SYSTEM FOR ADAPTING A DIFFUSION MODEL
2y 5m to grant Granted Mar 17, 2026
Patent 12579623
IMAGE PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND READABLE STORAGE MEDIUM
2y 5m to grant Granted Mar 17, 2026
Patent 12567185
Method and system of creating and displaying a visually distinct rendering of an ultrasound image
2y 5m to grant Granted Mar 03, 2026
Patent 12548202
TEXTURE COORDINATE COMPRESSION USING CHART PARTITION
2y 5m to grant Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+23.3%)
2y 8m
Median Time to Grant
Low
PTA Risk
Based on 370 resolved cases by this examiner. Grant probability derived from career allow rate.
