Prosecution Insights
Last updated: April 19, 2026
Application No. 18/850,681

DISPLAY CONTROL DEVICE, DISPLAY CONTROL METHOD, AND DISPLAY CONTROL PROGRAM

Status: Final Rejection (§102)
Filed: Sep 25, 2024
Examiner: YODICHKAS, ANEETA
Art Unit: 2627
Tech Center: 2600 — Communications
Assignee: Sony Group Corporation
OA Round: 2 (Final)
Grant Probability: 71% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 7m
Grant Probability With Interview: 96%

Examiner Intelligence

Career Allow Rate: 71% — above average (498 granted / 697 resolved; +9.4% vs TC avg)
Interview Lift: +24.5% (strong), measured on resolved cases with interview
Typical Timeline: 2y 7m average prosecution; 15 applications currently pending
Career History: 712 total applications across all art units
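
The headline figures above follow from the raw counts by simple arithmetic. The Python sketch below assumes the dashboard computes "with interview" by adding the interview lift to the career allow rate — a plausible reading of the numbers shown, not a documented formula.

```python
# Reproducing the examiner stats shown above (assumed method, not documented).
granted = 498      # granted applications, from the card above
resolved = 697     # resolved applications

allow_rate = granted / resolved          # career allow rate
interview_lift = 0.245                   # +24.5% interview lift

# Assumption: "with interview" probability = allow rate + lift.
with_interview = allow_rate + interview_lift

print(f"Career allow rate: {allow_rate:.0%}")      # 71%
print(f"With interview:    {with_interview:.0%}")  # 96%
```

Both printed values match the dashboard's 71% and 96% figures, which supports the additive-lift reading.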

Statute-Specific Performance

§101: 4.6% (-35.4% vs TC avg)
§103: 41.4% (+1.4% vs TC avg)
§102: 39.3% (-0.7% vs TC avg)
§112: 9.8% (-30.2% vs TC avg)

Deltas are relative to a Tech Center average estimate • Based on career data from 697 resolved cases
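
As a sanity check on the deltas above, the implied Tech Center average can be backed out by subtracting each delta from the examiner's rate; every statute points to the same roughly 40% estimate. The Python sketch below takes the per-statute figures from the table; the subtraction is an assumption about how the deltas were computed.

```python
# Backing out the Tech Center average implied by each "vs TC avg" delta.
# Assumption: delta = examiner_rate - tc_average, so tc_average = rate - delta.
rates = {
    "§101": (4.6, -35.4),
    "§103": (41.4, +1.4),
    "§102": (39.3, -0.7),
    "§112": (9.8, -30.2),
}

for statute, (rate, delta) in rates.items():
    tc_avg = rate - delta
    print(f"{statute}: examiner {rate}% -> implied TC avg {tc_avg:.1f}%")
```

Each line prints an implied TC average of 40.0%, suggesting the tool benchmarks all four statutes against a single flat estimate.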

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-17 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by U.S. Patent Pub. 2021/0286502 A1 to Lemay et al.

As to claim 1, Lemay discloses a display control device, comprising: an acquisition unit configured to acquire first position and posture information of an input apparatus located in a real space (Fig. 1, paragraphs 0048-0062, where input devices (125, 130, 140, 150) acquire eye and hand positions and postures); an extraction unit configured to extract, based on the first position and posture information of the input apparatus, a first part of virtual content in a virtual space, wherein a stereoscopic display stereoscopically displays the virtual content in the real space (Fig. 1, 2 and 7K-7M, paragraphs 0066-0071 and 0173-0179, where controller (110) extracts, based on the position and posture of hand (7038), a part of virtual content, hand (7038’), in a virtual space, via the data obtaining unit (241)); and a generation unit configured to generate video content based on the first part of the virtual content (Fig. 1-3, paragraphs 0061, 0067-0073, where display generation component (120) generates video content).

As to claim 2, Lemay discloses the display control device, wherein the extraction unit is further configured to extract the first part of the virtual content at a first angle of view (Fig. 1, 2 and 7A, paragraphs 0066-0071 and 0112-0120, where controller (110) is the extraction unit), and the first angle of view corresponds to a line-of-sight of a user based on the first position and posture information of the input apparatus (Fig. 7A, paragraphs 0112-0120, where virtual content (7100) is displayed at an angle based on user (7202) position and posture).

As to claim 3, Lemay discloses the display control device, wherein the extraction unit is further configured to extract a second part of the virtual content with a second angle of view based on the first position and posture information of the input apparatus (Fig. 7A, paragraphs 0112-0120, where virtual content (7102) is displayed at a second angle based on user (7204) position and posture), and the generation unit is further configured to generate the video content corresponding to the second angle of view (Fig. 1-3 and 7A, paragraphs 0112-0120, where CGR content is displayed on display (7102) based on the angle).

As to claim 4, Lemay discloses the display control device, wherein the extraction unit is further configured to: detect a specific object included in the virtual content; and extract the second part of the virtual content with the second angle of view to include the detected specific object (Fig. 7F and 7G, paragraphs 0149-0150, where object (7014) is detected and is included in the virtual content as virtual object (7014’)).

As to claim 5, Lemay discloses the display control device, wherein the extraction unit is further configured to: detect a face of the specific object; and correct the second angle of view to include the face of the specific object in the first angle of view (Fig. 7A-7D, paragraphs 0119-0122, where the user’s face (7202, 7204) is detected and displayed based on the angle).

As to claim 6, Lemay discloses the display control device, wherein the extraction unit is further configured to: set a gaze point in the virtual content based on the first position and posture information of the input apparatus; and extract a second part of the virtual content based on a second angle of view that connects the line-of-sight of the user and the gaze point (Fig. 1 and 5, paragraphs 0098-0103, where eye tracking device (130) tracks the gaze point of the user’s eyes (592)), and the generation unit is further configured to generate the video content corresponding to the second angle of view (Fig. 1 and 5, paragraphs 0098-0103, where controller (110) generates frames (562) to be displayed on display (510)).

As to claim 7, Lemay discloses the display control device, wherein the extraction unit is further configured to: apply, based on the first position and posture information of the input apparatus, a camera parameter to a virtual camera, wherein the virtual camera is in the virtual space, and extract a range of the virtual space corresponding to a second angle of view, wherein the second angle of view is associated with the virtual space imaged by the virtual camera (Fig. 7A, paragraph 0117, where cameras (7104, 74016) capture the virtual space).

As to claim 8, Lemay discloses the display control device, wherein the extraction unit is further configured to: perform a correction process to include a specific object in the second angle of view; and extract the first part of the virtual content based on the correction process to include an imaging target, wherein the imaging target corresponds to the specific object (Fig. 7F and 7G, paragraphs 0149-0150, where object (7014) is detected and is included in the virtual content as virtual object (7014’)).

As to claim 9, Lemay discloses the display control device, wherein the extraction unit is further configured to extract, based on a camera trajectory, the range of the virtual space corresponding to the second angle of view (Fig. 7K-7M, paragraphs 0173-0179, where the trajectory of the user (7202) opening the box lid (7042) with hand (7038) is detected such that the virtual lid (7042’) and virtual hand (7038’) are displayed on display (7100)).

As to claim 10, Lemay discloses the display control device, wherein the acquired first position and posture information of the input apparatus is detected by a sensor, and the input apparatus includes the sensor (Fig. 1-3, paragraph 0076, where image sensors (314) acquire position and posture information).

As to claim 11, Lemay discloses the display control device, wherein a sensor detects the acquired first position and posture information of the input apparatus, and the stereoscopic display includes the sensor (Fig. 1-3 and 7A, paragraphs 0048 and 0193, where input devices (125, 130, 140, 150) acquire position and posture information and are included in stereoscopic display (7100)).

As to claim 12, Lemay discloses the display control device, wherein an external device detects the acquired first position and posture information of the input apparatus, and the external device is different from each of the input apparatus, the stereoscopic display, and the display control device (Fig. 1-3 and 5, paragraph 0100, where controller (110) may direct external cameras for capturing the physical environment of the CGR experience to focus in the determined direction).

As to claim 13, Lemay discloses the display control device, further comprising a display control unit configured to control a display of the video content on an external display (Fig. 7A, paragraph 0113, where content is displayed on displays (7100, 7202)).

As to claim 14, Lemay discloses the display control device, wherein the generation unit is further configured to generate the video content that includes three-dimensional information (Fig. 7A, paragraphs 0112-0119, where three-dimensional information is displayed on displays (7100, 7102)), the display control unit is further configured to control, based on a viewpoint in the virtual space, the display of the video content on the external display, and the viewpoint is set based on the first position and posture information of the input apparatus (Fig. 7A, paragraphs 0112-0119, where three-dimensional information is displayed on displays (7100, 7102) based on the position and postures of users (7202, 7204)).

As to claim 15, Lemay discloses the display control device, wherein the acquisition unit is further configured to acquire second position and posture information of a plurality of input apparatuses (Fig. 1-3 and 7A, paragraphs 0048 and 0193, where input devices (125, 130, 140, 150) acquire position and posture information), the extraction unit is further configured to extract the first part of the virtual content based on the second position and posture information of each of the plurality of input apparatuses (Fig. 1 and 2, paragraphs 0066-0071, where controller (110) extracts virtual content based on posture and position information via the data obtaining unit (241)), the generation unit is further configured to generate a plurality of pieces of the video content based on the first part of the virtual content (Fig. 1-3, paragraphs 0061, 0067-0073, where display generation component (120) generates video content), and the display control unit is further configured to display the plurality of pieces of the video content, wherein the plurality of pieces of the video content is switched based on a user input (Fig. 7A-7Q, paragraphs 0205-0206, where the user (7202, 7204) can switch the plurality of pieces of video content based on their interaction with the physical and virtual objects).

As to claim 16, Lemay discloses limitations similar to claim 1.

As to claim 17, Lemay discloses limitations similar to claim 1. In addition, Lemay discloses a non-transitory computer-readable medium having stored thereon computer-executable instructions which, when executed by a computer, cause the computer to execute operations (Fig. 2, paragraph 0065, where memory (220) includes a non-transitory computer readable storage medium).

Response to Arguments

Applicant's arguments filed 8/28/2025 have been fully considered but they are not persuasive.

Applicant argues, with respect to claim 1, on pages 9-11, lines 10-1, that Lemay fails to disclose "an extraction unit configured to extract, based on the first position and posture information of the input apparatus, a first part of the virtual content in a virtual space". Examiner disagrees, as Lemay discloses "an extraction unit configured to extract, based on the first position and posture information of the input apparatus, a first part of the virtual content in a virtual space" (Fig. 1, 2 and 7K-7M, paragraphs 0066-0071 and 0173-0179, where controller (110) extracts, based on the position and posture of hand (7038), a part of virtual content, hand (7038’), in a virtual space, via the data obtaining unit (241)).

Applicant argues, with respect to claims 16 and 17, on page 11, lines 1-4, that the claims are not anticipated by Lemay for the reasons stated above with regard to claim 1. Examiner disagrees for the reasons stated above.

Applicant argues, with respect to claims 2-15, on page 11, lines 5-10, that the claims are not anticipated by Lemay based on their dependence on claim 1. Examiner disagrees for the reasons stated above.

Applicant argues, with respect to claims 1-7, on page 11, lines 11-13, that the claims are in condition for allowance. Examiner disagrees for the reasons stated above.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANEETA YODICHKAS whose telephone number is (571)272-9773. The examiner can normally be reached Monday-Friday 9-5.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ke Xiao, can be reached at 571-272-7776. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

ANEETA YODICHKAS
Primary Examiner
Art Unit 2627

/ANEETA YODICHKAS/
Primary Examiner, Art Unit 2627

Prosecution Timeline

Sep 25, 2024: Application Filed
May 27, 2025: Non-Final Rejection — §102
Aug 28, 2025: Response Filed
Nov 10, 2025: Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601909: Electronic Devices and Corresponding Methods for Rendering Content for a Companion Device (2y 5m to grant; granted Apr 14, 2026)
Patent 12602154: USER INTERFACES INTEGRATING HARDWARE BUTTONS (2y 5m to grant; granted Apr 14, 2026)
Patent 12601921: HEAD-WEARABLE DEVICE FOR VIDEO CAPTURE AND VIDEO STREAMING, AND SYSTEMS AND METHODS OF USE THEREOF (2y 5m to grant; granted Apr 14, 2026)
Patent 12597390: PIXEL, DISPLAY DEVICE AND ELECTRONIC DEVICE INCLUDING THE SAME (2y 5m to grant; granted Apr 07, 2026)
Patent 12589875: SMART WINDOW SYSTEM (2y 5m to grant; granted Mar 31, 2026)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 71% (96% with interview, a +24.5% lift)
Median Time to Grant: 2y 7m
PTA Risk: Moderate

Based on 697 resolved cases by this examiner. Grant probability derived from career allow rate.
