Prosecution Insights
Last updated: April 19, 2026
Application No. 18/242,786

METHOD FOR SUPPORTING MOTION RECOGNITION FOR ROBOT, COMPUTING DEVICE SUPPORTING THE SAME, AND SYSTEM SUPPORTING THE SAME

Status: Final Rejection (§103)
Filed: Sep 06, 2023
Examiner: CAMMARATA, MICHAEL ROBERT
Art Unit: 2667
Tech Center: 2600 (Communications)
Assignee: Kia Corporation
OA Round: 2 (Final)

Grant Probability: 70% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 4m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 70% (213 granted / 305 resolved; +7.8% vs TC avg, above average)
Interview Lift: +35.9% (strong; allow rate of resolved cases with an interview vs. without)
Avg Prosecution: 2y 4m
Currently Pending: 46
Total Applications: 351 (across all art units)
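
These headline figures reduce to simple cohort arithmetic. A minimal sketch in Python, assuming hypothetical per-case records (the `Case` fields and the cohort split are illustrative assumptions, not the dashboard's actual data model):

```python
from dataclasses import dataclass

@dataclass
class Case:
    granted: bool        # resolved by grant rather than abandonment
    had_interview: bool  # at least one examiner interview on record

def allow_rate(cases: list[Case]) -> float:
    """Career allow rate: share of resolved cases that granted."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases: list[Case]) -> float:
    """Percentage-point gap in allow rate, with vs. without an interview."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

# 213 grants out of 305 resolved cases gives the displayed 70%:
# 213 / 305 = 0.6984...  A +35.9% lift then reads as the with-interview
# cohort's allow rate exceeding the without-interview cohort's by ~36 points.
```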

Statute-Specific Performance

§101: 4.5% (-35.5% vs TC avg)
§103: 45.8% (+5.8% vs TC avg)
§102: 21.1% (-18.9% vs TC avg)
§112: 24.6% (-15.4% vs TC avg)
Tech Center averages are estimates. Based on career data from 305 resolved cases.
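
One detail worth noticing: subtracting each delta from the examiner's rate recovers the same Tech Center baseline for every statute, which squares with the footnote calling the TC average an estimate. A quick check in Python (percentages read off the table above; the meaning of the rates themselves is whatever the dashboard measures and is not assumed here):

```python
# Examiner's statute-specific rates and signed deltas vs. the TC average,
# in percentage points, as displayed above.
examiner_rate = {"101": 4.5, "103": 45.8, "102": 21.1, "112": 24.6}
delta_vs_tc = {"101": -35.5, "103": 5.8, "102": -18.9, "112": -15.4}

# Implied TC baseline per statute: examiner rate minus delta.
tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
print(tc_avg)  # {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```

All four statutes imply the same 40.0% baseline, suggesting the comparison line is a single flat estimate rather than a per-statute average.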

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The substantive claim amendments filed 28 January 2026 overcome the rejections under 35 USC 112(a) and (b).

Response to Arguments

Applicant's arguments with respect to claims 1, 9 and 16 filed 28 January 2026 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. See the application of Nakamura (US 2022/0108468 A1), which is necessitated by the claim amendments and addresses the now-claimed temporal accumulation of the redefined video "image" and the amended process steps for each of the plurality of image frames.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 5, 6, 8-10, 13-14, and 19-24 are rejected under 35 U.S.C. 103 as being unpatentable over Yamanaka (US 2019/0266734 A1) and Nakamura (US 2022/0108468 A1).

Claim 1

In regards to claim 1, Yamanaka discloses a computing device {Fig. 23 and cites below} comprising: memory configured to store {Fig. 23 illustrates a computing device including memory 703, storage 706, CPU 702 and input device 704, which includes a camera 111, Figs. 1, 4, capturing image frames of a person 100, [0036], [0045]-[0046], [0126]-[0136]}, wherein the processor {702} is configured to: determine a plurality of joints from each {Fig. 4 including joint recognizing unit 131, 141, [0043]-[0054] and corresponding method including step S502, Fig. 20, [0111]-[0122]}; determine joint data comprising: joint prediction values for predicting whether the plurality of joints correspond to any of a plurality of known joints of the person, wherein the joint prediction values comprise a predicted probability value of each of the plurality of joints, and joint location values corresponding to locations of the plurality of joints {Fig. 4, the joint recognizing unit determines, for each divided block, the probability value ("prediction") of whether the joints correspond to known joints, and joint location values. See Figs. 2-4, 11, 20 including the heat map graphically depicting the joint location value and the predicted probability value thereof, while Fig. 11 further illustrates template matching employed by the joint recognizing device 131 that determines the claimed correspondence to "any of a plurality of known joints of the person", [0066]-[0069]}; generate, based on the joint data, a virtual joint image comprising coordinate values that correspond to the joint location values {Fig. 3, heat map, [0040]-[0042]. Alternatively, see the Figs. 11, 12, 13 estimation value array, which is a "virtual joint image"/array of values (joint position probability distribution PhiRGB) indicating coordinate locations and estimation values ("prediction values") for each of the joints, [0069]-[0077], on a divided block-by-block basis}, wherein the coordinate values, in the virtual joint image, are divided into a plurality of channels; and store the generated virtual joint image in the memory {see the groups of images obtained by division (divided into channels) in Figs. 7, 10-13, [0054], [0068], [0075], which are used to generate the coordinate values in the virtual joint image on a divided block-by-block basis, and wherein the virtual joint image (heat map and/or joint position probability distribution PhiRGB and/or probabilities Znm for each block) is also stored and output as per [0096]-[0101], Fig. 4, [0049]-[0051], Fig. 12, [0072]-[0073], Fig. 16, [0097]-[0098], Fig. 19, [0109], Fig. 17, Fig. 20 including S510}; wherein the processor is configured to determine the joint data by determining, based on the plurality of joints, a plurality of coordinate values for the plurality of joints in each of the joint prediction values corresponding to the plurality of coordinate values, wherein the plurality of coordinate values comprise an x-axis value for each joint of the plurality of joints and a y-axis value for each joint of the plurality of joints {Fig. 4, the joint recognizing unit determines, for each divided block, the probability value ("joint prediction values", see 112(b) rejection above) of whether the joints correspond to known joints, and joint coordinate values. See Figs. 2-4, 11 including the heat map graphically depicting the joint location/coordinate values and the probability thereof, while Fig. 11 further illustrates template matching employed by the joint recognizing device 131 that determines the claimed correspondence to "any of a plurality of known joints of the person", [0066]-[0069]. See also the Figs. 11, 12, 13 estimation value array, which is a "virtual joint image"/array of values (joint position probability distribution PhiRGB) indicating coordinate locations and estimation values ("prediction values") for each of the joints, [0069]-[0077], on a block-by-block basis}; generate the virtual joint image based on the representative value {Fig. 20, steps S508-S510 and corresponding disclosures}.

Nakamura is analogous art from the same field of determining joint locations of a human from image data. See abstract, Figs. 1-6, [0001] and cites below. Nakamura teaches a memory configured to store a plurality of image frames, wherein each of the plurality of image frames comprises an image of a person captured as a subject; and a processor operatively connected with the memory {Fig. 2 illustrates a computer device including memory, processor and parallel channels/pipelines of cameras and processors. As to capturing images of persons, see Figs. 3, 9, 10A-B and [0075]-[0079]}, wherein the processor is configured to: determine a plurality of joints from each of the plurality of image frames {Figs. 1, 3 including obtaining joint position candidates, a heatmap of likelihood of position for body joints, and obtaining position candidates for feature points (joints), [0085]-[0088], [0094]-[0103]}; determine the joint data by determining, based on the plurality of joints, a plurality of coordinate values for the plurality of joints in each of the plurality of image frames and the joint prediction values corresponding to the plurality of coordinate values, wherein the plurality of coordinate values comprise an x-axis value for each joint of the plurality of joints and a y-axis value for each joint of the plurality of joints {Fig. 11, [0104]-[0110]}; accumulate the determined plurality of coordinate values and the joint prediction values to generate a representative value {see Fig. 1, smoothing processing unit smoothing joint positions and prediction values to generate a representative value. See also Fig. 3, smoothing and kinematics calculations; [0062], [0075]-[0077], [0082], [0111]-[0129]}.

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to have modified Yamanaka, which already, for a single image, determines joint location values and joint data to generate a virtual joint image, including determining the joint data by determining, based on the plurality of joints, a plurality of coordinate values for the plurality of joints in the image frame and the joint prediction values corresponding to the plurality of coordinate values, wherein the plurality of coordinate values comprise an x-axis value for each joint of the plurality of joints and a y-axis value for each joint of the plurality of joints, such that the method is applied to video and also accumulates the determined plurality of coordinate values and the joint prediction values for a plurality of image frames to generate a representative value, as taught by Nakamura, because Nakamura teaches that accumulating the values for a plurality of image frames smoothes temporal variations, because there is a reasonable expectation of success, and/or because doing so merely combines prior art elements according to known methods to yield predictable results.

Claim 2

In regards to claim 2, Yamanaka discloses wherein the memory stores a joint generation learning model provided to determine the plurality of joints corresponding to the image, and wherein the processor is further configured to: apply the joint generation learning model to the image to determine, based on the {see the joint recognizing unit 131 that uses template matching or a Deep Convolutional Neural Network (DCNN), which is a joint generation learning model that is applied to the image to determine the joints and joint data as claimed. See [0054], Fig. 12, [0071]-[0078]}. Nakamura teaches accumulating the determined plurality of coordinate values and the joint prediction values for a plurality of image frames to generate a representative value {see above mapping for claim 1}.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to have modified Yamanaka as set forth for claim 1 above, such that the method is applied to video and also accumulates the determined plurality of coordinate values and the joint prediction values for a plurality of image frames to generate a representative value, as taught by Nakamura, and such that the representative values for the plurality of image frames can be used to apply the joint generation learning model to the image to determine, based on the plurality of image frames, the plurality of joints, and to determine the joint data based on the plurality of joints, because Nakamura teaches that accumulating the values for a plurality of image frames smoothes temporal variations, because there is a reasonable expectation of success, and/or because doing so merely combines prior art elements according to known methods to yield predictable results.

Claim 5

In regards to claim 5, Yamanaka discloses wherein the processor is configured to generate the virtual joint image by: dividing the joint location values and the joint prediction values based on a location for each body part of the person {see the groups of images obtained by division (divided into channels) in Figs. 7, 10-13, [0054], [0068], [0075], which are used to generate the coordinate values in the virtual joint image on a divided block-by-block basis, and wherein the virtual joint image (heat map and/or joint position probability distribution PhiRGB and/or probabilities Znm for each block) is also stored and output as per [0096]-[0101], Fig. 4, [0049]-[0051], Fig. 12, [0072]-[0073], Fig. 16, [0097]-[0098], Fig. 19, [0109], Fig. 17, Fig. 20 including S510. Note also that the divisions are based on a location for each body part of the person as illustrated in Figs. 7, 10, 11 and 13 and discussed in these figures' corresponding disclosure sections}; and arranging the divided joint location values and joint prediction values at different locations on a data arrangement diagram {see the estimated skeleton data diagrams in Figs. 1, 2}.

Claim 6

In regards to claim 6, Yamanaka discloses wherein the processor is configured to generate the virtual joint image by: dividing the joint location values and the joint prediction values into at least one of: an upper body portion and a lower body portion of the person, or a left body portion and a right body portion of the person with respect to a center line of the person {see the groups of images obtained by division (divided into channels) in Figs. 7, 10-13, [0054], [0068], [0075], which are used to generate the coordinate values in the virtual joint image on a divided block-by-block basis, and wherein the virtual joint image (heat map and/or joint position probability distribution PhiRGB and/or probabilities Znm for each block) is also stored and output as per [0096]-[0101], Fig. 4, [0049]-[0051], Fig. 12, [0072]-[0073], Fig. 16, [0097]-[0098], Fig. 19, [0109], Fig. 17, Fig. 20 including S510. Note also that the divisions are based on a location for each body part of the person as illustrated in Figs. 7, 10, 11 and 13 and discussed in these figures' corresponding disclosure sections. Note also that the divisions include the specified body portions recited in the claim}.

Claim 8

In regards to claim 8, Yamanaka discloses further comprising at least one of: a communication interface configured to receive the image from an external electronic device; a camera device configured to capture the image of the person as the subject; or a display configured to output the virtual joint image {Fig. 23 illustrates a computing device including network connection device 708 (communication interface), input device 704, which includes a camera 111, Figs. 1, 4, and output device 705, which may be a display, [0129]-[0136]}.

Claims 9, 10, 13-14

The rejection of device claims 1, 2, 5, and 6 above applies mutatis mutandis to the corresponding limitations of method claims 9, 10, 13, and 14, respectively, while noting that the rejection above cites to both device and method disclosures.

Claims 19-20

Yamanaka is not relied upon to disclose, but Nakamura teaches, (claim 19) wherein each of the plurality of image frames is associated with a different time point at which the image frame is captured and (claim 20) wherein at least one joint of the plurality of joints is moved in the plurality of image frames of a video capturing a movement of the person {see the cites above regarding Nakamura's video processing, wherein a video is commonly defined and understood to mean a plurality of image frames each associated with a different time point at which the image frame is captured. Moreover, the video captures movement of a person}. It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to have modified Yamanaka as set forth for claim 1 above, such that the method is applied to video and also accumulates the determined plurality of coordinate values and the joint prediction values for a plurality of image frames to generate a representative value, as taught by Nakamura, and wherein each of the plurality of image frames is associated with a different time point at which the image frame is captured and wherein at least one joint of the plurality of joints is moved in the plurality of image frames of a video capturing a movement of the person, as also taught by Nakamura, because Nakamura teaches that accumulating the values for a plurality of image frames smoothes temporal variations, because there is a reasonable expectation of success, and/or because doing so merely combines prior art elements according to known methods to yield predictable results.
Claims 22 and 23

The rejection of device claims 19 and 20 above applies mutatis mutandis to the corresponding limitations of method claims 22 and 23, while noting that the rejection above cites to both device and method disclosures.

Claims 21 and 24

In regards to claims 21 and 24, Yamanaka discloses wherein the plurality of coordinate values further comprise a z-axis value for each joint of the plurality of joints {see above cites for claim 1, which include x, y, and z (depth) axis values for each joint}. Nakamura also processes images with depth information, while [0079] indicates that a single camera image may contain depth information. Kim also employs x, y, and z coordinates.

Claims 7 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Yamanaka and Nakamura as applied to claim 6 above, and further in view of Bell (US 2025/0200951 A1).

Claim 7

In regards to claim 7, Yamanaka suggests applying the inventive techniques disclosed therein to motion analysis of human athletes in [0003] but does not disclose the details thereof as expressed in claim 7. Bell is analogous art from the same field of determining joint locations of a human from image data. See Fig. 2, including human skeleton joint extraction 204, [0037]-[0039], which determines joint locations/coordinates. Bell also teaches wherein the processor is further configured to: recognize a motion of the person on the image based on the virtual joint image {Fig. 2, action recognition module 206, [0040], [0043]-[0045]}; map a result of recognizing the motion with the image; and store the mapped result in the memory {Figs. 4-6, 7A-D and corresponding disclosure sections, which map the recognized motion to a predictive model that may be stored and used for future predictions}. It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to have modified Yamanaka, which already determines joint location values and joint data to generate a virtual joint image, such that the virtual joint image is further processed to recognize a motion of the person on the image based on the virtual joint image, map a result of recognizing the motion with the image, and store the mapped result in the memory, as taught by Bell, because Yamanaka suggests/motivates applying his inventive techniques of determining joint locations to motion analysis of human athletes, because there is a reasonable expectation of success, and/or because doing so merely combines prior art elements according to known methods to yield predictable results.

Claim 15

The rejection of device claim 7 above applies mutatis mutandis to the corresponding limitations of method claim 15, while noting that the rejection above cites to both device and method disclosures.

Claims 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Yamanaka, Nakamura, and Kim (KR-20190130692-A). A marked-up machine translation of Kim was provided with the first office action; all cross-references are with respect to this translation, and the mark-ups are hereby incorporated by reference to further demonstrate claim mapping.

Claim 16

In regards to claim 16, Yamanaka discloses a system comprising: a sensor configured to capture {Fig. 23 illustrates a computing device including memory 703, storage 706, CPU 702 and input device 704, which includes a camera 111, Figs. 1, 4, capturing an image (composed of a plurality of blocks spatially arranged or composed into a full image) of a person 100, [0036], [0045]-[0046], [0126]-[0136]}, wherein the computing device comprises: a communication interface configured to receive the and wherein the processor is configured to: apply a joint generation learning model, previously stored in the memory, to each of the {see the joint recognizing unit 131 that uses template matching or a Deep Convolutional Neural Network (DCNN), which is a joint generation learning model that is applied to the image to determine the joints and joint data as claimed. See [0054], Fig. 12, [0071]-[0078]}; determine joint data comprising: joint prediction values for predicting whether the plurality of joints correspond to any of a plurality of known joints of the person, wherein the joint prediction values comprise a predicted probability value of each of the plurality of joints, and joint location values corresponding to locations of the plurality of joints {Fig. 4, the joint recognizing unit determines, for each divided block, the probability value ("prediction") of whether the joints correspond to known joints, and joint location values. See Figs. 2-4, 11, 20 including the heat map graphically depicting the joint location value and the predicted probability value thereof, while Fig. 11 further illustrates template matching employed by the joint recognizing device 131 that determines the claimed correspondence to "any of a plurality of known joints of the person", [0066]-[0069]}; generate, based on the joint data, a virtual joint image comprising coordinate values that correspond to the joint location values, wherein the coordinate values, in the virtual joint image, are divided into a plurality of channels {see mapping in claim 1 addressing the same limitations}.

Kim is analogous art from the same field of determining joint locations of a human from image data, including video. See abstract, Technical Field, and cites below. As to video, see input unit 120 inputting a video signal, pg. 6, and processor 180 processing video, including to generate joint visualization maps, pg. 15 and cites below. Kim also teaches recognizing a motion of the person on the image based on the generated virtual joint image in the video {see the behavior recognition method that extracts joints for each frame in a time-series sequence of image frames (i.e., video) to produce a normalized skeleton node sequence that is then subjected to behavior/motion recognition as per pgs. 3-6}; provide a result of recognizing the motion to the robot {pg. 6 teaches applying the human motion/recognition results to various fields including robots}; and accumulate the determined plurality of coordinate values and the joint prediction values to generate a representative value {as broadly claimed, such "accumulation" is also met by Kim's normalization over the time-series sequence of image frames (video)}.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to have modified Yamanaka, which already determines joint location values and joint data to generate a virtual joint image, such that the virtual joint image is further processed to recognize a motion of the person on the image based on the virtual joint image and provide a result of recognizing the motion to the robot, as taught by Kim, because Yamanaka suggests/motivates applying his inventive techniques of determining joint locations to motion analysis of human athletes, because Kim suggests applying motion recognition to robots, because there is a reasonable expectation of success, and/or because doing so merely combines prior art elements according to known methods to yield predictable results.

Nakamura is analogous art from the same field of determining joint locations of a human from image data. See abstract, Figs. 1-6, [0001] and cites below. Nakamura teaches a memory configured to store a plurality of image frames, wherein each of the plurality of image frames comprises an image of a person captured as a subject; and a processor operatively connected with the memory {Fig. 2 illustrates a computer device including memory, processor and parallel channels/pipelines of cameras and processors. As to capturing images of persons, see Figs. 3, 9, 10A-B and [0075]-[0079]}, wherein the processor is configured to: determine a plurality of joints from each of the plurality of image frames {Figs. 1, 3 including obtaining joint position candidates, a heatmap of likelihood of position for body joints, and obtaining position candidates for feature points (joints), [0085]-[0088], [0094]-[0103]}; determine the joint data by determining, based on the plurality of joints, a plurality of coordinate values for the plurality of joints in each of the plurality of image frames and the joint prediction values corresponding to the plurality of coordinate values, wherein the plurality of coordinate values comprise an x-axis value for each joint of the plurality of joints and a y-axis value for each joint of the plurality of joints {Fig. 11, [0104]-[0110]}; accumulate the determined plurality of coordinate values and the joint prediction values to generate a representative value {see Fig. 1, smoothing processing unit smoothing joint positions and prediction values to generate a representative value. See also Fig. 3, smoothing and kinematics calculations; [0062], [0075]-[0077], [0082], [0111]-[0129]}.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to have modified Yamanaka such that the method is applied to video and also accumulates the determined plurality of coordinate values and the joint prediction values for a plurality of image frames to generate a representative value, as taught by Nakamura, for the same reasons set forth for claim 1 above.

Claims 17 and 18

The rejection of device claims 19 and 20 above applies mutatis mutandis to the corresponding limitations of method claims 17 and 18, while noting that the rejection above cites to both device and method disclosures.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US 2020/0035021 A1 determines confidence levels for determined joint locations. See [0045]. Tomono (US 2019/0012530 A1) discloses estimating joint positions and relative position scores and generates a "virtual joint image". See Figs. 1 and 6.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michael R Cammarata, whose telephone number is (571) 272-0113. The examiner can normally be reached M-Th 7am-5pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Bella, can be reached at 571-272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL ROBERT CAMMARATA/
Primary Examiner, Art Unit 2667
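
To make the rejection's technical theory concrete: the asserted combination reads on a pipeline that detects joints per frame with prediction values, accumulates them over frames into a representative value (the step mapped to Nakamura), and renders a channel-divided "virtual joint image" (the step mapped to Yamanaka). The NumPy sketch below is a hypothetical illustration of that pipeline only; the names, shapes, body-part split, and the random stub detector are assumptions, not the claims' or the references' actual implementations.

```python
import numpy as np

NUM_JOINTS = 17  # assumed skeleton size; the references' counts may differ
# Body-part split standing in for the claimed "plurality of channels".
CHANNELS = {"upper_body": range(0, 9), "lower_body": range(9, 17)}

def detect_joints(frame: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Stand-in for a per-frame joint detector (heat-map peaks, DCNN, etc.).

    Returns (coords, probs): coords is (NUM_JOINTS, 2) normalized x/y joint
    locations; probs is (NUM_JOINTS,) predicted probabilities that each
    detection corresponds to a known joint of the person.
    """
    rng = np.random.default_rng(0)  # placeholder; a real model goes here
    return rng.uniform(size=(NUM_JOINTS, 2)), rng.uniform(0.5, 1.0, NUM_JOINTS)

def representative_value(frames: list[np.ndarray]) -> tuple[np.ndarray, np.ndarray]:
    """Accumulate per-frame coordinates and prediction values over time and
    smooth them into a single representative value (the Nakamura step)."""
    coords, probs = zip(*(detect_joints(f) for f in frames))
    return np.stack(coords).mean(axis=0), np.stack(probs).mean(axis=0)

def virtual_joint_image(coords: np.ndarray, probs: np.ndarray,
                        size: tuple[int, int] = (64, 64)) -> np.ndarray:
    """Render the representative joints as per-channel heat maps, divided
    by body part (the Yamanaka 'virtual joint image' step)."""
    img = np.zeros((len(CHANNELS), *size), dtype=np.float32)
    for ch, joint_ids in enumerate(CHANNELS.values()):
        for j in joint_ids:
            x = int(coords[j, 0] * (size[1] - 1))
            y = int(coords[j, 1] * (size[0] - 1))
            img[ch, y, x] = probs[j]  # write the joint's prediction value
    return img

# Usage: accumulate over a short clip, then render the channel-divided image.
frames = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(8)]
coords, probs = representative_value(frames)
vji = virtual_joint_image(coords, probs)  # shape (2, 64, 64)
```

A motion classifier over the stacked channels would then correspond to the recognition step the action maps to Bell and Kim.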

Prosecution Timeline

Sep 06, 2023: Application Filed
Oct 25, 2025: Non-Final Rejection (§103)
Jan 27, 2026: Response Filed
Mar 20, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602797
RECONSTRUCTION OF BODY MOTION USING A CAMERA SYSTEM
Granted Apr 14, 2026 • 2y 5m to grant
Patent 12586171
METHODS AND SYSTEMS FOR GRADING DEVICES
Granted Mar 24, 2026 • 2y 5m to grant
Patent 12579597
Point Group Data Synthesis Apparatus, Non-Transitory Computer-Readable Medium Having Recorded Thereon Point Group Data Synthesis Program, Point Group Data Synthesis Method, and Point Group Data Synthesis System
Granted Mar 17, 2026 • 2y 5m to grant
Patent 12579835
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND COMPUTER-READABLE RECORDING MEDIUM FOR DISTINGUISHING OBJECT AND SHADOW THEREOF IN IMAGE
Granted Mar 17, 2026 • 2y 5m to grant
Patent 12567283
FACIAL RECOGNITION DATABASE USING FACE CLUSTERING
Granted Mar 03, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 70%
With Interview: 99% (+35.9%)
Median Time to Grant: 2y 4m
PTA Risk: Moderate
Based on 305 resolved cases by this examiner. Grant probability derived from career allow rate.
