Prosecution Insights
Last updated: April 19, 2026
Application No. 18/560,947

INFORMATION PROCESSING DEVICE AND IMAGE GENERATION METHOD

Final Rejection — §103, §112
Filed: Nov 15, 2023
Examiner: GALKA, LAWRENCE STEFAN
Art Unit: 3715
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Sony Interactive Entertainment Inc.
OA Round: 2 (Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 11m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 76% (649 granted / 851 resolved) — above average, +6.3% vs TC avg
Interview Lift: strong, +18.6% across resolved cases with interview
Typical Timeline: 2y 11m avg prosecution; 28 currently pending
Career History: 879 total applications across all art units
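The headline figures above are internally consistent and can be cross-checked with a few lines of arithmetic. The sketch below is a hypothetical recomputation from the displayed numbers, not the tool's actual model; in particular, treating the interview lift as a simple additive bump is an assumption:

```python
# Cross-check the dashboard's examiner statistics.
# 649 granted out of 851 resolved cases; +18.6 percentage-point
# interview lift (assumed additive, which matches the displayed 95%).
granted, resolved = 649, 851

allow_rate = 100 * granted / resolved   # career allow rate, in percent
with_interview = allow_rate + 18.6      # naive additive interview lift

print(f"Career allow rate: {allow_rate:.1f}%")      # 76.3%
print(f"With interview:   ~{with_interview:.0f}%")  # ~95%
```

The rounded 76% shown in the header is this same 76.3% career rate, and 76.3 + 18.6 ≈ 94.9, which rounds to the 95% "With Interview" figure.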

Statute-Specific Performance

§101: 11.1% (-28.9% vs TC avg)
§103: 35.3% (-4.7% vs TC avg)
§102: 25.6% (-14.4% vs TC avg)
§112: 18.3% (-21.7% vs TC avg)
Deltas are relative to the Tech Center average estimate • Based on career data from 851 resolved cases
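Each statute-specific rate above is reported alongside a delta against the Tech Center average. As a sanity check (a hypothetical recomputation, not the tool's methodology), the implied TC baseline can be recovered by subtracting the delta from the examiner's rate:

```python
# Recover the implied Tech Center average allowance rate per statute:
# examiner rate minus the stated "vs TC avg" delta.
stats = {  # statute: (examiner rate %, delta vs TC avg %)
    "§101": (11.1, -28.9),
    "§103": (35.3, -4.7),
    "§102": (25.6, -14.4),
    "§112": (18.3, -21.7),
}
for statute, (rate, delta) in stats.items():
    print(f"{statute}: implied TC avg = {rate - delta:.1f}%")
```

All four deltas imply the same ~40.0% baseline, which suggests the comparison uses a single estimated TC-wide average rather than a per-statute one.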

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Applicant’s submission of a response on 12/31/25 has been received and considered. In the response, Applicant amended claims 1-11. Therefore, claims 1-11 are pending. In addition, Applicant has provided a revised Title for the specification, which is approved for entry.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claim 8 recites the limitation "the image generation unit". There is insufficient antecedent basis for this limitation in the claim.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. 
In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim(s) 1-11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tokunaga et al. (pub. no. 20210124174) in view of Lee et al. (pub. no. 20180005387).

Regarding claim 1, Tokunaga discloses an information processing device comprising: one or more processors; and a non-transitory memory storing computer readable instructions that, when executed by the one or more processors (“FIG. 1 shows a configuration example of a calibration system of the present disclosure. A calibration system 11 shown in FIG. 1 is a system made up of a head mounted display (HMD) 31, an information processor 32, and a display device 33 including a display, in which camera parameters of a camera 31a provided in the HMD 31 is estimated (measured) by calibration”, [0049]), cause the one or more processors to at least: estimate at least one of a position and a posture of a head-mounted display (HMD) on a basis of an image obtained by capturing a vicinity of the head-mounted display; and generate a display image to be displayed on a display device different from the head-mounted display (“At this time, the camera 31a of the HMD 31 captures an image of the calibration pattern displayed on the display device 33, and the HMD 31 estimates (measures) the camera parameters of the camera 31a on the basis of the image of the calibration pattern captured by the camera 31a. 
Also, at the time of executing the calibration process, the HMD 31 of the calibration system 11 simultaneously displays and presents, to the user 21, a VR image as shown by a state St2 in FIG. 2 and an AR image as shown by a state St3 in FIG. 2. Thus, the calibration system 11 executes the calibration process in the background while functioning as a game machine and estimates (measures) the camera parameters of the camera 31a. The camera parameters to be obtained here are, for example, the information of the position and orientation of the setup of the camera 31a starting from the display device 33, information of internal parameters (distortion, focal length, optical center, etc.) of the camera 31a, the information of the position and orientation between the cameras 31a in a case where there is a plurality of cameras 31a, color information including the white balance of the camera 31a, and the like. However, in a case where the camera parameters of the color information including the white balance are estimated (measured), the calibration pattern needs to be, for example, an RGB image. 
As a result, while the user 21 wears the HMD 31 and enjoys the game using the information processor 32, the calibration can be performed with the user 21 being unaware of the calibration, and the camera parameters can be estimated (measured) appropriately“, [0054] – [0058]), wherein the display image includes a moving image, and estimating at least one of the position and the posture of the HMD includes: capture additional images of the vicinity of the HMD including the display image, and estimate the at least one of the position and the posture of the HMD on the basis of the additional images (“In the above, the example has been described where the HMD 31 adjusts the size and brightness of the calibration pattern displayed on the display unit 113 of the display device 33 in accordance with the positional relationship with the display device 33 and the brightness, to appropriately estimate (measure) the camera parameters of the camera 31a (imaging unit 53) However, when the camera parameters of the camera 31a (imaging unit 53) can be estimated (measured), an image except for the calibration pattern may be displayed on the display device 33 and imaged by the camera 31a (imaging unit 53) for use in calibration. For example, a content image usable for calibration may be set in advance, displayed on the display device 33 at a predetermined timing, and used for calibration at the displayed timing. That is, as shown in the right portion of FIG. 12, it is assumed that content including a moving image in which content images P(t1) to P(t3) are sequentially displayed in time series at timings of times t1 to t3 is displayed on the display unit 113 of the display device 33. In this case, a content image P(t3) displayed beforehand at the timing of time t3 is set as calibration content to be used for calibration, and the position of a predetermined object in the content is used for calibration. 
In this case, the calibration processing unit 71 previously stores calibration content Pt 51 corresponding to the content image P(t3) as shown in the left portion of FIG. 12. Then, at time t3, the calibration processing unit 71 performs calibration by comparison between predetermined positions OB1 to OB3 of the object in the content image P(t3) in the image captured by the camera 31a (imaging unit 53) and predetermined positions OB11 to OB13 of the object in the calibration content Pt 51 stored correspondingly in advance and estimates (measures) the camera parameters”, [0215] – [0219]).

Regarding claim 1, it is noted that Tokunaga does not disclose displaying a frame formed of a picture pattern being a still image. Lee, however, discloses displaying a frame formed of a picture pattern being a still image (“Video content can be captured and provided in a variety of source formats, resolutions, and aspect ratios. These include, for example, standard definition (SD), high definition (HD), and video graphics array (VGA) formats, along with aspect ratios of 4:3, 16:9, 1.85:1, and 2.39:1, to name just a few. The format in which such content is stored or transmitted, however, may vary from that of the source or provider. For example, older SD video may be re-broadcast as HD, or widescreen content may be recorded on a DVD, both involving format changes. When formats change, scaling algorithms are employed to scale up (or scale down) the video to the expected target resolution. If the aspect ratios of the source and target are different, scaling of the video to fit the target along one dimension will result in either cropping the scaled image or augmenting the image with blank pixels in the other dimension. The latter approach is typically preferred as it retains the entire source content. Generally, the blank pixels appear as bars, either at the sides of the video (pillar-box format) or at the top and bottom of the video (letter-box format). 
The region of a video frame displaying the actual content (i.e., excluding the blank pixels) is referred to as the active display region”, [0001]; a black border interpreted to be a picture pattern being a still image). Exemplary rationales that may support a conclusion of obviousness include combining prior art elements according to known methods to yield predictable results. Here both Tokunaga and Lee are concerned with systems displaying videos on a display. To add the letter boxing of Lee to the moving image of Tokunaga would be to combine prior art elements according to known methods to yield predictable results. Therefore, it would have been obvious to a person having ordinary skill in the art as of the effective filing date of the claimed invention to modify Tokunaga to include the letter boxing as taught by Lee. To do so would allow the display of videos without rescaling regardless of the original resolution.

Regarding claim 2, Tokunaga discloses generating an HMD display image, and the moving image displayed inside the frame of the display image includes the HMD display image (“More specifically, the information processor 32 outputs content including a virtual reality (VR) image or an augmented reality (AR) image to the HMD 31 worn by the user 21 for display”, [0050]; “Note that in a case where the AR image is projected and displayed, the display unit 54 may be either a transmissive display or a non-transmissive display or may be a display having a structure that covers either the right or left eye as necessary”, [0100]).

Regarding claim 3, the combination of Tokunaga and Lee discloses the picture pattern does not repeat a same pattern at close positions (Lee: [0001]).

Regarding claim 4, the combination of Tokunaga and Lee discloses the picture pattern includes a plurality of corner portions (Lee: [0001]). 
Regarding claim 5, the combination of Tokunaga and Lee discloses the picture pattern includes a still image on at least one of an upper side, a lower side, a left side, and a right side (Lee: [0001]).

Regarding claim 6, Tokunaga discloses generate the display image including the picture pattern while estimating the at least one of the position and the posture of the HMD, and not generate the display image including the picture pattern while not estimating the at least one of the position and the posture of the HMD (“In step S13, the calibration processing unit 71 of the control unit 51 analyzes the image captured by the imaging unit 53 and determines whether or not the display device 33 is imaged. The calibration processing unit 71 determines whether or not the display device 33 has been imaged on the basis of, for example, whether or not the shape of the display device 33 is included as a result of detection of an object in the captured image”, [0129]; see also [0226]; if S13 or S153 is “no” then no estimation processing and no image generation will take place).

Regarding claim 7, Tokunaga discloses generate the display image including the picture pattern while a user is wearing the head-mounted display on a head portion, and not generate the display image including the picture pattern while the user is not wearing the head-mounted display on the head portion ([0129]; see also [0226]; if S13 or S153 is “no” then no estimation processing and no image generation will take place).

Regarding claim 8, Tokunaga discloses a setting unit that sets a picture pattern display function by the image generation unit to be set to one of an enabled status and a disabled status (“For example, the calibration may be started such that the calibration pattern is periodically repeatedly displayed on the display device 33 each time a predetermined time elapses. 
Further, the calibration may be started at a timing when calibration is required, such as a timing at which a discrepancy occurs between an image to be originally captured by the camera 31a (imaging unit 53) and an image actually captured by the camera 31a (imaging unit 53) from the information of the position and orientation estimated by SLAM based on an image captured by the imaging unit 53 due to deviation in the setup direction of the camera 31a (imaging unit 53) or the like for some reason, for example. Moreover, when the user 21 views the VR image or the AR image to be viewed and feels discomfort in a change in the image with respect to a change in the position or orientation, the calibration may be started at a timing, for example, when the user 21 mutters a word indicating abnormality such as “something wrong” or “deviated,” for example”, [0189] – [0191]).

Regarding claim 9, Tokunaga discloses enable the picture pattern display function and generate a display image for display on the head-mounted display, the display image including an option for setting the picture pattern display function to be enabled ([0190] – [0191]).

Claim 10 is directed to the method implemented by the device of claim 1 and is rejected for the same reasons as claim 1. Claim 11 is directed to a program that is implemented by the device of claim 1 and is rejected for the same reasons as claim 1.

Response to Arguments

On pages 9 & 10, Applicant argues the amended claims overcome the prior art of record because Tokunaga fails to disclose estimating a position and posture of the HMD because Tokunaga only estimates camera calibration. Examiner respectfully disagrees. Tokunaga discloses calculating position and orientation of the camera [0057], which is interpreted to be the position and posture of the HMD.

In addition, Applicant argues that Lee teaches away because it considers blank pixels problematic. Examiner respectfully disagrees. 
Lee regards cropping a video as problematic and teaches the use of black borders to prevent cropping.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LAWRENCE STEFAN GALKA whose telephone number is (571)270-1386. The examiner can normally be reached M-F 6-9 & 12-5.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Lewis, can be reached at 571-272-7673. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LAWRENCE S GALKA/
Primary Examiner, Art Unit 3715

Prosecution Timeline

Nov 15, 2023
Application Filed
Sep 28, 2025
Non-Final Rejection — §103, §112
Dec 31, 2025
Response Filed
Feb 26, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12589294 — SYSTEMS AND METHODS FOR ELECTRONIC GAME CONTROL WITH VOICE DETECTION AND AUDIO STREAM PROCESSING
Granted Mar 31, 2026 • 2y 5m to grant

Patent 12576334 — RECEPTION APPARATUS, TRANSMISSION APPARATUS, AND INFORMATION PROCESSING METHOD
Granted Mar 17, 2026 • 2y 5m to grant

Patent 12569764 — INPUT ANALYSIS AND CONTENT ALTERATION
Granted Mar 10, 2026 • 2y 5m to grant

Patent 12569756 — CLOUD APPLICATION-BASED DEVICE CONTROL METHOD AND APPARATUS, ELECTRONIC DEVICE AND READABLE MEDIUM
Granted Mar 10, 2026 • 2y 5m to grant

Patent 12573270 — CONTROLLING A USER INTERFACE
Granted Mar 10, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 76%
With Interview: 95% (+18.6%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate
Based on 851 resolved cases by this examiner. Grant probability derived from career allow rate.
