Prosecution Insights
Last updated: April 19, 2026
Application No. 18/842,002

COMPRESSION OF XR DATA META-FRAMES COMMUNICATED THROUGH NETWORKS FOR RENDERING BY XR DEVICES AS AN XR ENVIRONMENT

Status: Non-Final OA (§102)
Filed: Aug 27, 2024
Examiner: SHANKAR, VIJAY
Art Unit: 2624
Tech Center: 2600 — Communications
Assignee: Telefonaktiebolaget LM Ericsson (publ)
OA Round: 1 (Non-Final)

Grant Probability: 91% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 91% (above average; 1001 granted / 1101 resolved; +28.9% vs TC avg)
Interview Lift: +8.5% (moderate) among resolved cases with an interview
Avg Prosecution: 2y 4m; 21 applications currently pending
Career History: 1,122 total applications across all art units

Statute-Specific Performance

§101: 5.0% (-35.0% vs TC avg)
§103: 12.4% (-27.6% vs TC avg)
§102: 44.8% (+4.8% vs TC avg)
§112: 10.6% (-29.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 1,101 resolved cases.

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 1 and 12 are objected to because of the following informalities: Claims 1 and 12 are missing a period at the end. Appropriate correction is required.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1 and 14 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Rowley (US Patent 10,789,764 B2, in IDS).

Regarding Claim 1, Rowley teaches an extended reality, XR, environment server (110 in Fig. 1) for communicating XR data meta-frames through networks to an XR device (same as viewing device 112 in Fig. 1) for rendering as an XR environment (see Figures 3-8; communication between server 110 and viewing device 112 in Fig. 3), the XR environment server comprising:

a network interface configured to communicate through the networks (Figures 3-8; see link between server 110 and viewing device 112 for transmission of 3D model 111 in Fig. 3);

at least one processor (302 in Fig. 3); and

at least one memory (304 in Fig. 3) storing instructions (306 in Fig. 3) executable by the at least one processor to perform operations to:

obtain input XR data meta-frames which define objects for rendering as the XR environment through the XR device to a user (see Column 16, lines 13-20: "see image feeds F1-F4"; "Rendering and Bokeh"; "When system 100 generates viewing experience 200 in real-time based upon image feeds F1-F4, location and movement data feed F5, spectator location and viewing direction data feed F6, and sound feeds F7, latency of system 100 is low to maintain integrity of viewing experience 200, particularly where viewing experience 200 shows augmented reality or extended reality…");

determine relevance of the individual objects to interests of the user of the XR device (column 16, lines 15-17: "location and movement data feed F5, spectator location and viewing direction data feed F6"; column 16, lines 22-23: "determining spectator viewpoint 320 based upon a location of spectator 101 relative to event area 103"; column 20, lines 17-22: "in certain embodiments, instructions 306, when executed by processor 302, control processor 302 to determine occurrence of interest 130 (FIG. 1) based at least in part upon three-dimensional model 111 and virtual camera 606, determining at least an identity and coordinates, relative to three-dimensional model 111, for occurrence of interest 130.");

adjust renderable details of the individual objects responsive to the determined relevance to the interests of the user to the individual objects (column 16, lines 38-40: "Bokeh causes blurring of less important portions of an image (e.g., background and/or foreground)"; column 16, lines 44-50: "Bokeh may also highlight the portion of interest (e.g., occurrence of interest 130) to the user within viewing experience 200 since this portion appears in more detail and attracts the attention of the eye of spectator 101, whereas the blurred foreground/background has reduced detail that does not attract the eye's attention.");

generate compressed output XR data meta-frames from the input XR data meta-frames based on the adjusted renderable details of the individual objects (column 16, lines 42-44: "fewer pixels need be rendered to generate viewing experience 200 based upon three-dimensional model 111"); and

communicate the compressed output XR data meta-frames through the network interface toward the XR device for rendering (see Figures 3-8 for communication between server 110 and viewing device 112).

Regarding Claim 14, Rowley teaches a method for communicating extended reality, XR, meta-frames from an XR environment server (110 in Fig. 1) through a network to an XR device (same as viewing device 112 in Fig. 1) for rendering as an XR environment (see Figures 3-8; communication between server 110 and viewing device 112 in Fig. 3), the method comprising:

obtaining input XR data meta-frames which define objects for rendering through the XR device as the XR environment to a user (see Column 16, lines 13-20: "see image feeds F1-F4"; "Rendering and Bokeh"; "When system 100 generates viewing experience 200 in real-time based upon image feeds F1-F4, location and movement data feed F5, spectator location and viewing direction data feed F6, and sound feeds F7, latency of system 100 is low to maintain integrity of viewing experience 200, particularly where viewing experience 200 shows augmented reality or extended reality…");

determining relevance of individual objects to interests of the user of the XR device (column 16, lines 15-17: "location and movement data feed F5, spectator location and viewing direction data feed F6"; column 16, lines 22-23: "determining spectator viewpoint 320 based upon a location of spectator 101 relative to event area 103"; column 20, lines 17-22: "in certain embodiments, instructions 306, when executed by processor 302, control processor 302 to determine occurrence of interest 130 (FIG. 1) based at least in part upon three-dimensional model 111 and virtual camera 606, determining at least an identity and coordinates, relative to three-dimensional model 111, for occurrence of interest 130.");

adjusting renderable details of the individual objects responsive to the determined relevance to the interests of the user to the individual objects (column 16, lines 38-40: "Bokeh causes blurring of less important portions of an image (e.g., background and/or foreground)"; column 16, lines 44-50: "Bokeh may also highlight the portion of interest (e.g., occurrence of interest 130) to the user within viewing experience 200 since this portion appears in more detail and attracts the attention of the eye of spectator 101, whereas the blurred foreground/background has reduced detail that does not attract the eye's attention.");

generating compressed output XR data meta-frames from the input XR data meta-frames based on the adjusted renderable details of the individual objects (column 16, lines 42-44: "fewer pixels need be rendered to generate viewing experience 200 based upon three-dimensional model 111"); and

communicating the compressed output XR data meta-frames through the network interface toward the XR device for rendering through the XR device (see Figures 3-8 for communication between server 110 and viewing device 112).

Allowable Subject Matter

Claims 2-13, 15, 17-18, 20, 23-24 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Wetmore, Dolev, Rudman, Berliner and Agrawal all teach extended reality and an XR environment server communicating through a network to an XR device for rendering.

Examiner cites particular columns and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.

It is noted that any citation to specific pages, columns, figures, or lines in the prior art references, or any interpretation of the references, should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331-33, 216 USPQ 1038-39 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)).

Examiner's Note

Examiner has cited particular paragraphs/columns and line numbers or figures in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested from the applicant, in preparing the responses, to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.

Applicant is reminded that the Examiner is entitled to give the broadest reasonable interpretation to the language of the claims. Furthermore, the Examiner is not limited to Applicant's definition which is not specifically set forth in the claims. In the case of amending the claimed invention, Applicant is respectfully requested to indicate the portion(s) of the specification which dictate(s) the structure relied on for proper interpretation and also to verify and ascertain the metes and bounds of the claimed invention.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to VIJAY SHANKAR, whose telephone number is (571) 272-7682. The examiner can normally be reached M-F, 9 am-6 pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Eason, can be reached at 571-270-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/VIJAY SHANKAR/
Primary Examiner, Art Unit 2624

Prosecution Timeline

Aug 27, 2024: Application Filed
Feb 03, 2026: Non-Final Rejection, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603065
DISPLAY SYSTEM, IMAGE OUTPUT APPARATUS, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM STORING PROGRAM
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12597220
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12592040
METHOD AND APPARATUS FOR METAVERSE PERFORMANCE AUTHORING SYSTEM
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12586509
DISPLAY APPARATUS, DISPLAY DRIVING DEVICE AND DRIVING METHOD
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12585354
Mechanical Force Redistribution Sensor Array Embedded in a Single Support Layer
Granted Mar 24, 2026 (2y 5m to grant)
Study what changed in these applications to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 91%
With Interview: 99% (+8.5%)
Median Time to Grant: 2y 4m
PTA Risk: Low
Based on 1101 resolved cases by this examiner. Grant probability derived from career allow rate.
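The headline figures follow from simple arithmetic on the career statistics shown above. A minimal sketch of that reading (the exact formula, including treating 99% as a cap on the with-interview figure, is our assumption, not a documented method of the tool):

```python
# Reproduce the projections from the examiner's career stats.
# Assumption: grant probability is the career allow rate, and the
# with-interview figure adds the +8.5% interview lift, capped at 99%.

granted, resolved = 1001, 1101
allow_rate = granted / resolved              # career allow rate ~0.909
interview_lift = 0.085                       # lift observed with interviews

grant_prob = round(allow_rate * 100)                                  # 91
with_interview = min(99, round((allow_rate + interview_lift) * 100))  # 99

print(f"Grant probability: {grant_prob}%, with interview: {with_interview}%")
```

The same allow rate (1001 / 1101 ≈ 90.9%) is what the dashboard rounds to the 91% shown in both the header card and the projections.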
