Prosecution Insights
Last updated: April 19, 2026
Application No. 18/820,103

SYSTEMS AND METHODS FOR VIRTUAL AND AUGMENTED REALITY

Non-Final OA (§103, Double Patenting)
Filed: Aug 29, 2024
Examiner: PATEL, JITESH
Art Unit: 2612
Tech Center: 2600 — Communications
Assignee: Magic Leap Inc.
OA Round: 1 (Non-Final)
Grant Probability: 78% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 2m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 78%, above average (312 granted / 398 resolved; +16.4% vs TC avg)
Interview Lift: +12.4% among resolved cases with interview (moderate lift)
Typical Timeline: 2y 2m average prosecution; 14 applications currently pending
Career History: 412 total applications across all art units
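The headline numbers above can be reproduced with simple arithmetic. This is a minimal sketch; the counts come from the page, but the assumption that the career allow rate is simply granted/resolved (and that the Tech Center average is implied by the reported delta) is mine, not the product's stated method.

```python
# Reconstructing the examiner stats from the reported counts.
granted, resolved = 312, 398

allow_rate = granted / resolved        # 0.7839... -> displayed as 78%
implied_tc_avg = allow_rate - 0.164    # page reports +16.4% vs TC average

print(f"Career allow rate: {allow_rate:.0%}")
print(f"Implied TC average: {implied_tc_avg:.1%}")
```

312/398 rounds to the displayed 78%, and subtracting the reported +16.4% delta implies a Tech Center baseline of roughly 62%.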

Statute-Specific Performance

§101: 6.2% (-33.8% vs TC avg)
§103: 61.3% (+21.3% vs TC avg)
§102: 3.8% (-36.2% vs TC avg)
§112: 16.6% (-23.4% vs TC avg)
Baselines are Tech Center average estimates; based on career data from 398 resolved cases.
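The statute table reports, for each statute, a rate and a delta versus the Tech Center average, so the baseline can be recovered as rate minus delta. A short sketch (the recovery formula is an assumption about how the page computes the comparison):

```python
# Recover the implied Tech Center baseline for each statute from the
# reported rate and delta (baseline = rate - delta).
rates  = {"§101": 6.2, "§103": 61.3, "§102": 3.8, "§112": 16.6}
deltas = {"§101": -33.8, "§103": 21.3, "§102": -36.2, "§112": -23.4}

for statute, rate in rates.items():
    baseline = rate - deltas[statute]
    print(f"{statute}: {rate}% ({deltas[statute]:+}% vs TC avg {baseline:.1f}%)")
```

All four deltas imply the same ~40% baseline, which is consistent with the page using a single Tech Center comparison figure.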

Office Action

Rejections: §103, Double Patenting
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-38 of U.S. Patent No. 12112574. Although the claims at issue are not identical, they are not patentably distinct from each other because this application is a continuation of 17/211,502 and claims, in more words but more broadly, the invention concisely claimed in 17/211,502. The claims map to each other as follows (Instant Application / U.S. Patent No. 12112574):

Claim 1 maps to Claims 1, 13, and 21. Instant claim 1: “An apparatus for providing an augmented reality experience, comprising: a screen, wherein the screen is at least partially transparent for allowing a user of the apparatus to view an object in an environment surrounding the user; a graphic generator configured to provide a virtual content for display by the screen, wherein the screen is configured to display the virtual content in the virtual space; a space definer configured to obtain an input, and to define a virtual space based on the input, wherein the space definer is configured to obtain the input while the screen is being worn by the user.” Patent claim 1: “An apparatus for providing a virtual or augmented reality experience, comprising: a screen, wherein the screen is at least partially transparent for allowing a user of the apparatus to view an object in an environment surrounding the user; a surface detector configured to detect a surface of the object; an object identifier configured to (1) obtain an orientation of the surface of the object and/or an elevation of the surface of the object, and (2) after the orientation and/or the elevation is obtained, determine whether the object is a wall, a floor, or a furniture based on the orientation of the surface of the object and/or the elevation of the surface of the object, wherein when the object identifier obtains the orientation and/or the elevation, an identity of the object with which the orientation and/or the elevation is associated is unknown to the object identifier; and a graphic generator configured to generate an identifier for the object for display by the screen, the identifier indicating that the object is the wall, the floor, or the furniture, and wherein a transparent portion of the screen is configured to display the identifier.” Patent claim 13: “… a space definer configured to define a virtual space.” Patent claim 21: “… the space definer is configured to obtain a user input generated via a controller component.”

Claim 2 maps to Claim 14. Instant claim 2: “The apparatus of claim 1, wherein the space definer is configured to define a virtual wall for the virtual space.” Patent claim 14: “The apparatus of claim 13, wherein the space definer is configured to define a virtual wall for the virtual space.”

Claim 3 maps to Claim 15. Instant claim 3: “The apparatus of claim 2, wherein the virtual wall is offset from a real physical wall in the environment surrounding the user.” Patent claim 15: “The apparatus of claim 14, wherein the virtual wall is offset from a real physical wall in the environment surrounding the user, the real physical wall being the wall or another wall.”

Claim 4 maps to Claim 16. Instant claim 4: “The apparatus of claim 3, wherein the virtual wall is aligned with, or intersects, a real physical wall in the environment surrounding the user.” Patent claim 16: “The apparatus of claim 14, wherein the virtual wall is aligned with, or intersects, a real physical wall in the environment surrounding the user, the real physical wall being the wall or another wall.”

Claim 5 maps to Claim 17. Instant claim 5: “The apparatus of claim 4, wherein the screen is configured to display a wall identifier at a location in the screen, such that when the user views the virtual wall, the wall identifier will be in a spatial relationship with respect to the virtual wall.” Patent claim 17: “The apparatus of claim 14, wherein the screen is configured to display a wall identifier at a location in the screen, such that when the user views the virtual wall, the wall identifier will be in a spatial relationship with respect to the virtual wall.”

The remaining claims map as follows: Claim 6 to Claim 18; Claim 7 to Claim 19; Claim 8 to Claim 20; Claim 9 to Claim 21; Claim 10 to Claim 22; Claim 11 to Claim 23; Claim 12 to Claim 24; Claim 13 to Claim 25; Claim 14 to Claim 26; Claim 15 to Claim 27; Claim 16 to Claim 30; Claim 17 to Claim 31; Claim 18 to Claim 32; Claim 19 to Claim 38; and Claim 20 is similar to claim 1.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 7-14 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Chuah et al (US 10937247 B1).

Regarding claim 1, Chuah discloses an apparatus for providing an augmented reality experience (Chuah fig. 1 – 102; col. 8, l. 36, “augmented reality room or object capture application (“AR capture application”) 102 may be executed by a processor of a user device”), comprising: a screen, wherein the screen is at least partially transparent for allowing a user of the apparatus to view an object in an environment surrounding the user (Chuah col. 23, l. 66, “user interface screen may also include a substantially horizontal line indication … The horizontal line indication 408 may be presented with a … transparency (based on a partially transparent screen)”); a space definer configured to obtain an input, and to define a virtual space based on the input, wherein the space definer is configured to obtain the input while the screen is being worn by the user (Chuah col. 7, l. 21, “a user device, such as a … headset or head-mounted computing device, eyeglass or eyewear computing device (screen is being worn by the user)”; col. 10, l. 12, “a room modeler 118 (a space definer), which may be an application executed on the user device … The room modeler 118 may generate a three-dimensional model of the room 120 (define a virtual space)”); and a graphic generator configured to provide a virtual content for display by the screen, wherein the screen is configured to display the virtual content in the virtual space (Chuah fig. 1 - a graphic generator; col. 24, l. 56, “user interface screen may also include a visual grid, overlay … indicating the floor plane position (generate an identifier indicating the identification for the object for display by the screen).”).

Chuah does not disclose, explicitly in one embodiment, the invention as recited in this claim. However, the combined embodiments, as cited, would have made the claimed invention obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention. This would have been done to implement a system that would enable users to integrate several features disclosed in Chuah. See, for example, Chuah, col. 5, l. 62, “certain embodiments may be capable of achieving certain advantages, including some or all of the following: quickly and efficiently defining boundaries, dimensions, and measurements associated with a space, object, or environment, generating and presenting simplified user interfaces for definition of a space, object, or environment, generating and presenting guidance via user interfaces to facilitate intuitive definition of a space …”

Regarding claim 2, Chuah discloses the apparatus of claim 1, wherein the space definer is configured to define a virtual wall for the virtual space (Chuah col. 10, l. 12, “a room modeler 118 (a space definer), which may be an application executed on the user device … The room modeler 118 may generate a three-dimensional model of the room 120 (define a virtual space)”).

Regarding claim 7, Chuah discloses the apparatus of claim 1, wherein the space definer is configured to define a corner for the virtual space (Chuah col. 24, l. 3, “the horizontal line indication 408 may include various other shapes, sizes, or visual presentations, which may be selected by a user, such as one or more angled indicators to mark a corner (define a corner for the 3D room model/virtual space)”).

Regarding claim 8, Chuah discloses the apparatus of claim 1, wherein the space definer is configured to define a wall edge for the virtual space (Chuah col. 24, l. 37, “horizontal line indication 408 may be aligned with bases or edges of one or more walls (define an edge for the 3D room model/virtual space)”).

Regarding claim 9, Chuah discloses the apparatus of claim 1, wherein the space definer is configured to obtain the user input generated via a controller component, the user input indicating a selection of a feature in the environment for defining at least a part of the virtual space (Chuah col. 24, l. 11, “the horizontal line indication 408 may include various other shapes, sizes … which may be selected by a user, such as one or more angled indicators to mark (input indicating a selection of a feature in the environment for defining at least a part of the 3D model/virtual space)”; Chuah col. 105, l. 52, “The input devices may include … trackballs (an input based on a trackball is interpreted as reading on obtain a user input generated via a controller component)”).

Regarding claim 10, Chuah discloses the apparatus of claim 9, wherein the feature in the environment comprises a wall, a wall corner, an edge, or any combination of the foregoing (Chuah col. 24, l. 37, “horizontal line indication 408 may be aligned with bases or edges of one or more walls or wall planes to facilitate identification of one or more walls or wall planes, and/or the horizontal line indication 408 may be aligned with tops or edges of one or more walls or wall planes to facilitate identification of one or more ceilings or ceiling planes (a wall, an edge)”).

Regarding claim 11, Chuah discloses the apparatus of claim 9, wherein the user input indicates a cursor position in the screen (Chuah col. 23, l. 66, “user interface screen may also include a substantially horizontal line indication … the horizontal line indication 408 may include various other shapes, sizes, or visual presentations (user input indicates a cursor position in the screen), which may be selected by a user, such as one or more angled indicators to mark a corner, doorway, or other angled boundary of the room or space”).

Regarding claim 12, Chuah discloses the apparatus of claim 9, wherein the user input indicates an orientation of the controller component, and wherein the selection of the feature in the environment is based on a direction of pointing by the controller component towards the feature in the environment (Chuah col. 24, l. 11, “the horizontal line indication 408 may include various other shapes, sizes (the selection of the feature in the environment is based on a direction of pointing by the controller component towards the feature in the environment) … which may be selected by a user, such as one or more angled indicators to mark”; Chuah col. 105, l. 52, “The input devices may include … trackballs (an input based on a trackball is interpreted as reading on user input indicates an orientation of the controller component)”).

Regarding claim 13, Chuah discloses the apparatus of claim 1, further comprising a camera, wherein the apparatus is configured to select a feature in the environment, for defining at least a part of the virtual space, based on a presence of an image of the feature in a camera image provided by the camera, wherein the camera image is the input obtained by the space definer (Chuah fig. 1 – room modeler/space definer obtains an image from the camera/capture app 102; col. 3, l. 45, “the capture of images or pictures of a room or space may be based on imaging data detected by an imaging sensor (a camera)”; col. 4, l. 14, “determining a floor plane or lower boundary of the room or space based on imaging data (select the object for identification based on a presence of an image of the object in a camera image provided by the camera)”).

Regarding claim 14, Chuah discloses the apparatus of claim 13, wherein the apparatus is configured to select the feature in the environment automatically (Chuah col. 35, l. 56, “FIG. 6C, the floor determination process (automatically select object) may comprise receiving position and orientation data of the user device, receiving imaging data from an imaging sensor of the user device having at least a portion of the floor within a field of view, identifying various features within the imaging data associated with the floor”).

Claim 19 recites a method which corresponds to the function performed by the apparatus of claim 1.
As such, the mapping and rejection of claim 1 above is considered applicable to the method of claim 19.

Claim 20 recites a processor-readable non-transitory medium which corresponds to the function performed by the apparatus of claim 1. As such, the mapping and rejection of claim 1 above is considered applicable to the processor-readable non-transitory medium of claim 20. Additionally, Chuah discloses a processor-readable non-transitory medium storing a set of instructions, an execution of which by a processing unit will cause a method to be performed (Chuah col. 107, l. 35).

Claims 3-4 are rejected under 35 U.S.C. 103 as being unpatentable over Chuah in view of Loberg et al (US 20180197340 A1).

Regarding claim 3, Chuah discloses the apparatus of claim 2, but does not disclose wherein the virtual wall is offset from a real physical wall in the environment surrounding the user. However, Loberg discloses the virtual wall is offset from a real physical wall in the environment surrounding the user (Loberg [0048], “virtual wall 420 may be configured within the mixed-reality environment (exemplary environment surrounding the user) … software application 100 determines that the virtual wall 420 is incorrectly rendered such that the height is not correct (virtual wall is offset from a real physical wall in the environment surrounding the user)”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Chuah with Loberg to enable a feature for adjusting wall dimensions. This may have been done to provide a realistic environment for the user to adjust objects as desired.
Regarding claim 4, Chuah in view of Loberg discloses the apparatus of claim 3, wherein the virtual wall is aligned with, or intersects, a real physical wall in the environment surrounding the user (Loberg [0048], “virtual wall 420 may be configured within the mixed-reality environment (exemplary environment surrounding the user) … software application 100 adjusts the height of the virtual wall 420 such that it measures forty-eight inches. (virtual wall is offset from a real physical wall in the environment surrounding the user)”).

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Chuah in view of Loberg and further in view of Mount et al (US 11017611 B1).

Regarding claim 5, Chuah in view of Loberg discloses the apparatus of claim 4, but does not disclose wherein the screen is configured to display a wall identifier at a location in the screen, such that when the user views the virtual wall, the wall identifier will be in a spatial relationship with respect to the virtual wall. However, Mount discloses wherein the screen is configured to display a wall identifier at a location in the screen, such that when the user views the virtual wall, the wall identifier will be in a spatial relationship with respect to the virtual wall (Mount fig. 2; col. 13, l. 21, “grab points 233 (a wall identifier, at a location in the screen, and in a spatial relationship with respect to the virtual wall) may be positioned substantially aligned or flush with respective wall surfaces 206”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Chuah with Mount to display an identifier along with a virtual wall. This would have been done to enable users to adjust and manipulate virtual walls as desired.

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Chuah in view of Mount et al (US 11017611 B1).
Regarding claim 6, Chuah discloses the apparatus of claim 1, wherein the space definer is configured to define a plurality of virtual walls for the virtual space (Chuah col. 10, l. 12, “a room modeler 118, which may be an application executed on the user device … The room modeler 118 may generate a three-dimensional model of the room 120 (comprising a plurality of virtual walls)”), but does not disclose wherein the screen is configured to display wall identifiers for the respective virtual walls. However, Mount discloses the screen is configured to display wall identifiers for the respective virtual walls (Mount fig. 2; col. 13, l. 21, “grab points 233, 235 may be substantially centered at respective interfaces with respective wall surfaces 206, and grab points 233 (the screen is configured to display wall identifiers for the respective virtual walls) may be positioned substantially aligned or flush with respective wall surfaces 206”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Chuah with Mount to display an identifier along with a virtual wall. This would have been done to enable users to adjust and manipulate virtual walls as desired.

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Chuah in view of Keating et al (US 20140028712 A1).

Regarding claim 15, Chuah discloses the apparatus of claim 13, but does not disclose wherein the apparatus is configured to select the feature in response to the feature being present in a sequence of camera images that includes the camera image within a duration exceeding a time threshold.
However, Keating discloses the apparatus is configured to select the feature in response to the feature being present in a sequence of camera images that includes the camera image within a duration exceeding a time threshold (Keating [0054], “The set of selection criteria comprises at least one of the object being in view of the ARD (exemplary object identifier) for a predetermined period (threshold) of time (object being present in a sequence of camera images that comprise the camera image within a duration exceeding a time threshold) … ”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Chuah with Keating to enable a feature for selecting an object based on a time threshold. This would have been done to ensure that a desired object is selected in an accurate manner, and also to reduce processing requirements by ensuring that multiple objects are not selected in a short period of time.

Claims 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Chuah in view of Matsui (US 20130257907 A1).

Regarding claim 16, Chuah discloses the apparatus of claim 1, but does not disclose wherein the virtual content is also for interaction by an additional user. However, Matsui discloses wherein the virtual content is also for interaction by an additional user (Matsui fig. 1; [0040], “the mobile phone client device 100A and the mobile phone client device 100B include functions for displaying data on the screen of the display unit 106 … FIG. 1 illustrates how a virtual object Ar1, a virtual object Ar2, and a virtual object Ar3 are displayed on the screens of the respective display units 106 of the mobile phone client device 100A and the mobile phone client device 100B. (provide a virtual content for interaction by an additional user, and wherein the screen is configured to display the virtual content)”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Chuah with Matsui to enable a feature to present virtual content to multiple users. This would have enhanced Chuah by allowing users to work collaboratively in a virtual environment.

Regarding claim 17, Chuah in view of Matsui discloses the apparatus of claim 16, wherein the apparatus is configured to connect the user and the additional user to the virtual space so that the user and the additional user can interact with the virtual content at the virtual space (Matsui fig. 1; [0040], “the mobile phone client device 100A and the mobile phone client device 100B include functions for displaying data on the screen of the display unit 106 … FIG. 1 illustrates how a virtual object Ar1, a virtual object Ar2, and a virtual object Ar3 are displayed on the screens of the respective display units 106 of the mobile phone client device 100A and the mobile phone client device 100B. (provide a virtual content for interaction by the user and an additional user, and wherein the screen is configured to display the virtual content)”).

Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Chuah in view of Matsui and further in view of Latta et al (US 20130196772 A1).

Regarding claim 18, Chuah in view of Matsui discloses the apparatus of claim 16, but does not disclose wherein the graphic generator is configured to provide the virtual content for interaction by the user and the additional user in different respective rooms. However, Latta discloses the graphic generator is configured to provide the virtual content for interaction by the user and the additional user in different respective rooms (Latta [0016], “FIG. 1A shows an example first physical space 100 and an example second physical space 101. Physical space 100 may be in a different physical location than physical space 101 (provide the virtual content for interaction by the user and the additional user in different respective rooms).”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Chuah further with Latta to enable a feature to present virtual content to multiple users. This would have enhanced Chuah by allowing users to collaborate remotely in a virtual environment.

Conclusion

See the notice of references cited (PTO-892) for prior art made of record, including art that is not relied upon but considered pertinent to applicant's disclosure.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JITESH PATEL whose telephone number is (571) 270-3313. The examiner can normally be reached 8am - 5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Said A. Broome, can be reached at (571) 272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JITESH PATEL/
Primary Examiner, Art Unit 2612

Prosecution Timeline

Aug 29, 2024
Application Filed
Feb 20, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602866: DIGITAL TWIN AUTHORING AND EDITING ENVIRONMENT FOR CREATION OF AR/VR AND VIDEO INSTRUCTIONS FROM A SINGLE DEMONSTRATION (2y 5m to grant; granted Apr 14, 2026)
Patent 12597245: INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM (2y 5m to grant; granted Apr 07, 2026)
Patent 12586313: DAMAGE DETECTION FROM MULTI-VIEW VISUAL DATA (2y 5m to grant; granted Mar 24, 2026)
Patent 12579739: 2D CONTROL OVER 3D VIRTUAL ENVIRONMENTS (2y 5m to grant; granted Mar 17, 2026)
Patent 12579765: DEFINING AND MODIFYING CONTEXT AWARE POLICIES WITH AN EDITING TOOL IN EXTENDED REALITY SYSTEMS (2y 5m to grant; granted Mar 17, 2026)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 78%
With Interview (+12.4%): 91%
Median Time to Grant: 2y 2m
PTA Risk: Low
Based on 398 resolved cases by this examiner. Grant probability derived from career allow rate.
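The "With Interview" figure is consistent with simply adding the interview lift to the base grant probability. A minimal sketch, assuming an additive model (the page does not state its actual formula):

```python
# Sketch: "with interview" projection as base probability plus interview
# lift, capped at 100%. The additive model is an assumption.
base_probability = 78.4   # career allow rate, 312/398
interview_lift = 12.4     # reported lift among resolved cases with interview

with_interview = min(base_probability + interview_lift, 100.0)
print(f"With interview: {round(with_interview)}%")
```

78.4 + 12.4 rounds to the displayed 91%, so the additive reading fits the numbers shown, though the product may use a more sophisticated model internally.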
