Prosecution Insights
Last updated: April 19, 2026
Application No. 18/698,358

INFORMATION PROCESSING DEVICE AND PROGRAM

Final Rejection — §102, §103
Filed
Apr 04, 2024
Examiner
AGGARWAL, YOGESH K
Art Unit
2637
Tech Center
2600 — Communications
Assignee
Sony Group Corporation
OA Round
2 (Final)
90%
Grant Probability
Favorable
3-4
OA Rounds
2y 7m
To Grant
96%
With Interview

Examiner Intelligence

Grants 90% — above average
90%
Career Allow Rate
998 granted / 1113 resolved
+27.7% vs TC avg
+6.8%
Interview Lift
moderate lift across resolved cases with interviews
Typical timeline
2y 7m
Avg Prosecution
32 currently pending
Career history
1145
Total Applications
across all art units
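The headline figures above reduce to simple arithmetic on the raw counts. A minimal sketch, assuming the dashboard divides grants by resolved cases and adds the interview lift in percentage points (the helper names are illustrative, not the page's actual backend):

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage."""
    return 100.0 * granted / resolved

def with_interview(base_pct: float, lift_pp: float) -> float:
    """Apply an interview lift given in percentage points, capped at 100%."""
    return min(base_pct + lift_pp, 100.0)

base = allow_rate(998, 1113)                 # 998 granted of 1113 resolved
print(round(base, 1))                        # 89.7 — shown as "90%" above
print(round(with_interview(base, 6.8), 1))   # 96.5 — shown as "96%" above
```

Under these assumptions the displayed 90% and 96% are rounded views of 89.7% and 96.5%.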

Statute-Specific Performance

§101
5.3%
-34.7% vs TC avg
§103
49.8%
+9.8% vs TC avg
§102
36.4%
-3.6% vs TC avg
§112
5.1%
-34.9% vs TC avg
Black line = Tech Center average estimate • Based on career data from 1113 resolved cases
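Since the chart pairs each statute with the examiner's figure and a delta versus the Tech Center average, the implied TC baseline can be recovered by subtraction. A small sketch over the values above (the dict layout is an assumption; note that every statute implies the same 40.0% TC baseline, suggesting the black line is a single pooled estimate):

```python
# statute: (examiner %, delta vs Tech Center average in percentage points)
stats = {
    "101": (5.3, -34.7),
    "103": (49.8, +9.8),
    "102": (36.4, -3.6),
    "112": (5.1, -34.9),
}

for statute, (pct, delta) in stats.items():
    tc_avg = round(pct - delta, 1)  # examiner figure minus delta = TC baseline
    print(f"\u00a7{statute}: examiner {pct}% vs TC avg {tc_avg}%")
```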

Office Action

§102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments with respect to claim(s) 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-10, 12-17 and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Lee et al. (US PGPUB 20160140394).

[Claim 1] An information processing device comprising: circuitry (Paragraph 24) configured to: determine a composition position of an auxiliary image in a second captured image captured after a first captured image by calculating a predicted position of a subject in the second captured image in accordance with a position and a motion of the subject in the first captured image and determining the predicted position as the composition position (Paragraph 33, figs. 2, 3a and 4a illustrate a position estimation and detection of the target object in a current video frame 202, in accordance with an embodiment. In the illustrated examples, an estimate determined with a motion model predicts the target object has motion vector 312 and will move from a position associated with bounding box 210 within frame 201 to an estimated position associated with bounding box 315 within frame 202. Paragraph 35, Referring again to FIG. 3A the object detection algorithm beginning at the position associated with bounding box 315 iterates until converging to a detected object position 521 associated with bounding box 320.); and composite the auxiliary image with the composition position in the second captured image (Paragraph 35, Referring again to FIG. 3A the object detection algorithm beginning at the position associated with bounding box 315 iterates until converging to a detected object position 521 associated with bounding box 320).

[Claim 2] The information processing device according to claim 1, wherein the composition position determiner circuitry determines the position of the subject on the second captured image as the composition position on a basis of the position of the subject in the first captured image (Paragraph 33, figs. 2, 3a and 4a, FIG. 3A and 4A illustrate a position estimation and detection of the target object in a current video frame 202, in accordance with an embodiment. In the illustrated examples, an estimate determined with a motion model predicts the target object has motion vector 312 and will move from a position associated with bounding box 210 within frame 201 to an estimated position associated with bounding box 315 within frame 202. Paragraph 35, Referring again to FIG. 3A the object detection algorithm beginning at the position associated with bounding box 315 iterates until converging to a detected object position 521 associated with bounding box 320) and a difference between imaging timings of the first captured image and the second captured image (Paragraph 25, object tracking entails object detection over a time sequence, through which a temporal sequence of position coordinates associated with motion of the object across consecutive frames of image data is generated. Beyond position, other object features may also be updated as part of a state vector tracking one or more of object size, color texture, shape, etc. In “real-time” visual object tracking, an image data (video) stream is analyzed frame-by-frame concurrently with frame-by-frame generation or receipt of the stream).

[Claim 3] The information processing device according to claim 1, wherein the motion of the subject is identified by using a captured image captured before the first captured image (Paragraph 33, an estimate determined with a motion model predicts the target object has motion vector 312 and will move from a position associated with bounding box 210 within frame 201 to an estimated position associated with bounding box 315 within frame 202).

[Claim 4] The information processing device according to claim 1, wherein the auxiliary image is a frame image indicating a specific position of the subject (bounding box 320, fig. 3a).

[Claim 5] The information processing device according to claim 4, wherein the frame image is an image indicating a focus position (Paragraph 25, In further embodiments, at least the positional information associated with a tracked object is passed to a 3A (automatic focus, automatic exposure, automatic white balance) engine that manages further processing of the image frame(s)).

[Claim 6] The information processing device according to claim 1, wherein the auxiliary image is an image indicating a position of a specific subject recognized as a result of image recognition processing on the first captured image (Paragraph 33, FIG. 3A and 4A illustrate a position estimation and detection of the target object in a current video frame 202, in accordance with an embodiment. In the illustrated examples, an estimate determined with a motion model predicts the target object has motion vector 312 and will move from a position associated with bounding box 210 within frame 201 to an estimated position associated with bounding box 315 within frame 202).

[Claim 7] The information processing device according to claim 1, wherein the information processing device serves as a smartphone including an imaging unit that captures the first captured image and the second captured image (Paragraph 52).

[Claim 8] The information processing device according to claim 1, wherein the first captured image and the second captured image are preview images displayed on a display unit (Paragraph 58, As further illustrated in FIG. 6, target object data may be output to storage/display/transmission pipeline 695. In one exemplary storage pipeline embodiment, target object data is written to electronic memory 620 (e.g., DDR, etc.) to supplement stored input image data. Memory 620 may be separate or a part of a main memory 610 accessible to APU 650. Alternatively, or in addition, storage/display/transmission pipeline 695 is to transmit target object data and/or input image data off video capture device 503).

[Claim 9] This is a computer-readable medium corresponding to apparatus claim 1 and is therefore analyzed and rejected based upon apparatus claim 1.

[Claim 10] An information processing device comprising: a first processor (Paragraph 56, DSP 685 and/or applications processor (APU) 650 implements one or more of the validated model object tracking device modules depicted in FIG. 5) that performs image processing on a captured image output from a pixel array (image sensor 659, fig. 6) in which pixels each having a photoelectric conversion element are two-dimensionally arranged (Paragraph 29, The number of pixel values within one frame of image data depends on the input image resolution, which in further embodiments is a function of a local CM. Although embodiments herein are applicable to any input image resolution, in an exemplary embodiment the input image data is at least a 1920×1080 pixel (2.1 megapixel) representation of an image frame (i.e., Full HD)) and processing of determining a composition position of an auxiliary image to be composited with the captured image on a basis of the captured image (Paragraph 33, figs. 2, 3a and 4a, FIG. 3A and 4A illustrate a position estimation and detection of the target object in a current video frame 202, in accordance with an embodiment. In the illustrated examples, an estimate determined with a motion model predicts the target object has motion vector 312 and will move from a position associated with bounding box 210 within frame 201 to an estimated position associated with bounding box 315 within frame 202. Paragraph 35, Referring again to FIG. 3A the object detection algorithm beginning at the position associated with bounding box 315 iterates until converging to a detected object position 521 associated with bounding box 320.); and a second processor (Paragraph 58, As further illustrated in FIG. 6, target object data may be output to storage/display/transmission pipeline 695 from DSP 685) that performs processing of displaying, on a display, a composite image obtained by compositing the auxiliary image with the composition position in an image subjected to the image processing, wherein determining the composition position includes calculating a predicted position of a subject in a second captured image captured after a first captured image in accordance with a position and a motion of the subject in the first captured image and determining the predicted position as the composition position (Paragraph 33, figs. 2, 3a and 4a, FIG. 3A and 4A illustrate a position estimation and detection of the target object in a current video frame 202, in accordance with an embodiment. In the illustrated examples, an estimate determined with a motion model predicts the target object has motion vector 312 and will move from a position associated with bounding box 210 within frame 201 to an estimated position associated with bounding box 315 within frame 202. Paragraph 35, Referring again to FIG. 3A the object detection algorithm beginning at the position associated with bounding box 315 iterates until converging to a detected object position 521 associated with bounding box 320.).

[Claim 12] The information processing device according to claim 10, wherein the first processor executes processing of compositing the auxiliary image with the image subjected to the image processing (Paragraphs 33, 57-59).

[Claim 13] The information processing device according to claim 10, wherein the auxiliary image is a frame image indicating a specific position of a subject (bounding box 320, fig. 3a).

[Claim 14] The information processing device according to claim 13, wherein the frame image is an image indicating a focus position (Paragraph 25, In further embodiments, at least the positional information associated with a tracked object is passed to a 3A (automatic focus, automatic exposure, automatic white balance) engine that manages further processing of the image frame(s)).

[Claim 15] The information processing device according to claim 10, wherein the auxiliary image is an image indicating a position of a specific subject recognized as a result of image recognition processing on the captured image (Paragraph 33, FIG. 3A and 4A illustrate a position estimation and detection of the target object in a current video frame 202, in accordance with an embodiment. In the illustrated examples, an estimate determined with a motion model predicts the target object has motion vector 312 and will move from a position associated with bounding box 210 within frame 201 to an estimated position associated with bounding box 315 within frame 202).

[Claim 16] The information processing device according to claim 10, wherein the information processing device serves as a smartphone including the pixel array (Paragraphs 52 and 53).
[Claim 17] This is a computer-readable medium corresponding to apparatus claim 10 and is therefore analyzed and rejected based upon apparatus claim 10.

[Claim 20] The information processing device according to claim 1, wherein the circuitry is configured to complete determining the composition position of the auxiliary image before completing image processing for generating the second captured image as a preview image (In fig. 2, in the first frame, the composition position of the auxiliary image 210 has been completed before the next frame in fig. 3a).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 11 is rejected under 35 U.S.C. 103 as being unpatentable over Lee et al. (US PGPUB 20160140394) in view of Netsu (US PGPUB 20170223223).

[Claim 11] Lee fails to teach wherein the first processor performs the image processing and the processing of determining the composition position in parallel. However, Netsu teaches that image processes, such as color conversion, oblique motion correction, and dust removal, in addition to the image synthesis, may be performed either before or after the image synthesis is performed, or may be simultaneously performed (Paragraph 64). Therefore, taking the combined teachings of Lee and Netsu, it would have been obvious to one skilled in the art before the effective filing date of the invention to have the first processor perform the image processing and the processing of determining the composition position in parallel, in order to have a fast process that saves time.

Claim(s) 18 is rejected under 35 U.S.C. 103 as being unpatentable over Lee et al. (US PGPUB 20160140394) in view of Karaoguz et al. (US PGPUB 20190333372).

[Claim 18] Lee fails to teach wherein the circuitry is configured to measure a display delay time from imaging to display, and calculate the predicted position of the subject after the display delay time. However, Karaoguz teaches that the control station 30 comprises a second computer 42 configured to calculate the delay between the display and capture instants and to deduce from this delay, as well as the last measured position and speed received by the communication device 40, an estimated position of each moving object 14 at the display instant (Paragraph 62). Therefore, taking the combined teachings of Lee and Karaoguz, it would have been obvious to one skilled in the art before the effective filing date of the invention to measure a display delay time from imaging to display, and calculate the predicted position of the subject after the display delay time, in order to reduce the latency, which is usually long, between the capture instant when a camera captures an image and the display instant when this image is displayed.

Claim(s) 19 is rejected under 35 U.S.C. 103 as being unpatentable over Lee et al. (US PGPUB 20160140394) in view of Yamamoto et al. (JP 2018163700, published on Sep. 3, 2018; US PGPUB 20210306586 is used for translation).

[Claim 19] Lee fails to teach wherein the circuitry is configured to determine whether the subject is moving, and execute the calculation of the predicted position only when it is determined that the subject is moving. However, Yamamoto teaches that, based on the determination as to whether an object recognized in the frame 500a is a stationary body or a moving body, it is also possible to predict the position of the object 500a in the next frame 500b, enabling restriction of the readout region to be read out next based on the predicted position. Furthermore, at this time, in a case where the recognized object is a moving body, further predicting the speed of the moving body will make it possible to restrict the readout region to be read out next with higher accuracy (Paragraph 581). If the object is stationary, then the position will not change. Therefore, taking the combined teachings of Lee and Yamamoto, it would have been obvious to one skilled in the art before the effective filing date of the invention to determine whether the subject is moving, and execute the calculation of the predicted position only when it is determined that the subject is moving, in order to read out only when the subject is moving, thereby saving power.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to YOGESH K AGGARWAL whose telephone number is (571) 272-7360. The examiner can normally be reached Monday - Friday, 9:30-6.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sinh Tran, can be reached at (571) 272-7564. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/YOGESH K AGGARWAL/
Primary Examiner, Art Unit 2637
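The anticipation mapping turns on Lee's motion-model step: predicting where the subject's bounding box will sit in the next frame from its position and motion in the current frame (box 210 to estimated box 315 via motion vector 312). A minimal constant-velocity sketch of that prediction step, with all names and coordinates hypothetical (Lee's actual motion model is not detailed beyond the quoted passages):

```python
from dataclasses import dataclass

@dataclass
class Box:
    x: float  # top-left x
    y: float  # top-left y
    w: float  # width
    h: float  # height

def predict_box(prev: Box, motion: tuple[float, float]) -> Box:
    """Shift the previous frame's box by the estimated per-frame motion vector."""
    dx, dy = motion
    return Box(prev.x + dx, prev.y + dy, prev.w, prev.h)

# Box in frame N plus an estimated motion vector yields the predicted
# position in frame N+1 — the position the claims then use as the
# composition position for the auxiliary (frame) image.
box_210 = Box(100, 80, 40, 60)
box_315 = predict_box(box_210, (12, -4))
print(box_315)  # Box(x=112, y=76, w=40, h=60)
```

In Lee, this predicted box only seeds a detector that iterates to the converged position (box 320); the claims instead composite at the predicted position itself, which is where the applicant's distinction arguments would likely focus.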

Prosecution Timeline

Apr 04, 2024
Application Filed
Aug 23, 2025
Non-Final Rejection — §102, §103
Nov 28, 2025
Response Filed
Mar 05, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604079
INFORMATION PROCESSING SYSTEM AND INFORMATION PROCESSING METHOD
2y 5m to grant Granted Apr 14, 2026
Patent 12604100
IMAGE PROCESSING METHOD AND ELECTRONIC DEVICE
2y 5m to grant Granted Apr 14, 2026
Patent 12598265
COOPERATIVE PHOTOGRAPHING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant Granted Apr 07, 2026
Patent 12587735
IMAGING APPARATUS, METHOD FOR CONTROLLING THE SAME, AND STORAGE MEDIUM
2y 5m to grant Granted Mar 24, 2026
Patent 12579842
METHOD FOR ADAPTING THE QUALITY AND/OR FRAME RATE OF A LIVE VIDEO STREAM BASED UPON POSE
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
90%
Grant Probability
96%
With Interview (+6.8%)
2y 7m
Median Time to Grant
Moderate
PTA Risk
Based on 1113 resolved cases by this examiner. Grant probability derived from career allow rate.
