Prosecution Insights
Last updated: April 19, 2026
Application No. 18/635,985

METHODS AND SYSTEMS FOR CREATING VIRTUAL AND AUGMENTED REALITY

Non-Final OA: §101, §103, Double Patenting
Filed: Apr 15, 2024
Examiner: NAKHJAVAN, SHERVIN K
Art Unit: 2672
Tech Center: 2600 (Communications)
Assignee: Magic Leap Inc.
OA Round: 1 (Non-Final)
Grant Probability: 88% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 2y 7m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 88% (above average; 544 granted / 616 resolved; +26.3% vs TC avg)
Interview Lift: +10.9% (moderate, roughly +11%; measured on resolved cases with vs. without an interview)
Typical Timeline: 2y 7m average prosecution; 23 applications currently pending
Career History: 639 total applications across all art units
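The headline figures above follow from simple arithmetic on the career counts. A minimal check (the counts 544/616 and the +26.3% delta are taken from the dashboard; the Tech Center average is derived from them, not stated in the source):

```python
# Reproduce the examiner's headline statistics from the raw career counts.
granted = 544    # career grants (from the dashboard)
resolved = 616   # career resolved cases (from the dashboard)

allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")    # ~88.3%, displayed as 88%

# The "+26.3% vs TC avg" delta implies a Tech Center 2600 average of about:
tc_avg = allow_rate - 26.3
print(f"Implied TC average: {tc_avg:.1f}%")       # ~62.0%
```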

Statute-Specific Performance

§101: 12.3% (-27.7% vs TC avg)
§103: 36.4% (-3.6% vs TC avg)
§102: 25.3% (-14.7% vs TC avg)
§112: 14.6% (-25.4% vs TC avg)
Deltas are relative to a Tech Center average estimate. Based on career data from 616 resolved cases.
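The per-statute deltas are internally consistent: subtracting each delta from its rate recovers the same Tech Center average estimate. A quick check, assuming each delta is computed as examiner rate minus TC average:

```python
# Each pair is (examiner success rate %, delta vs TC average %), from the table above.
stats = {
    "§101": (12.3, -27.7),
    "§103": (36.4, -3.6),
    "§102": (25.3, -14.7),
    "§112": (14.6, -25.4),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # rate = tc_avg + delta
    print(f"{statute}: implied TC average = {tc_avg:.1f}%")
# All four rows recover the same ~40.0% baseline estimate.
```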

Office Action

§101 / §103 / Double Patenting
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-16 are variously rejected on the ground of nonstatutory double patenting as being unpatentable over claims 6, 7 and 19 of U.S. Patent No. 10,641,603 B2, and claims 1-18 of U.S. Patent No. 11,995,244 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because every feature or element of the claims of the instant application is variously recited in the claims of the patents. Since the word “comprising” in the claims of the instant application does not preclude further limitations, the claims of the instant application would have been obvious in view of the claims of the patents.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

35 U.S.C. 101 requires that a claimed invention must fall within one of the four eligible categories of invention (i.e., process, machine, manufacture, or composition of matter) and must not be directed to subject matter encompassing a judicially recognized exception as interpreted by the courts. MPEP 2106. Three categories of subject matter are judicially recognized exceptions to 35 U.S.C. § 101 (i.e., patent ineligible): (1) laws of nature, (2) physical phenomena, and (3) abstract ideas. MPEP 2106(II). To be patent eligible, a claim directed to a judicial exception must as a whole be directed to significantly more than the exception itself. See 2014 Interim Guidance on Patent Subject Matter Eligibility, 79 Fed. Reg. 74618, 74624 (Dec. 16, 2014). Hence, the claim must describe a process or product that applies the exception in a meaningful way, such that it is more than a drafting effort designed to monopolize the exception. Id.

Claims 1-16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more. Claim 1 is directed to receiving a plurality of input from respective devices of a plurality of users; updating a virtual world model based on the received plurality of input; and transmitting updated information corresponding to respective portions of the virtual world model to a second user, without additional elements that are sufficient to amount to significantly more than the judicial exception. Specifically, the claim recites receiving a plurality of input from respective devices of a plurality of users, which refers to gathering data under insignificant extra-solution activity, i.e.,
pre-solution activity (MPEP 2106.05(g)); updating a virtual world model based on the received plurality of input, which refers to an abstract idea, e.g., a mental process of observation, analysis, and judgment (mental processes: concepts performed in the human mind, including an observation, evaluation, judgment, or opinion; see MPEP § 2106.04(a)(2), subsection III); and transmitting updated information corresponding to respective portions of the virtual world model to a second user, which refers to post-solution activity under insignificant extra-solution activity, e.g., a printer that is used to output a report of fraudulent transactions, which is recited in a claim to a computer programmed to analyze and manipulate information about credit card transactions in order to detect whether the transactions were fraudulent (MPEP 2106.05(g)). Therefore, claim 1 meets the requirement of step 2A, Prong One of the guidelines for reciting an abstract idea. Claim 1 is further considered under step 2A, Prong Two, as to whether any additional elements in the claim integrate the abstract idea into a practical application (MPEP 2106.04(d)).
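For concreteness, the three limitations of claim 1 characterized in the rejection (pre-solution data gathering, a mental-process model update, and post-solution transmission) reduce to a receive/update/transmit pipeline. The sketch below is purely illustrative; all function names and data shapes are assumptions for exposition, not taken from the application or the prior art:

```python
# Illustrative sketch of claim 1's three steps as characterized in the rejection:
# (1) data gathering (pre-solution), (2) model update (the alleged mental
# process), (3) distribution of results (post-solution).

def receive_inputs(devices):
    """Step 1: gather input from each user's device (pre-solution activity)."""
    return [d.get("input") for d in devices]

def update_model(model, inputs):
    """Step 2: update the virtual world model (characterized as a mental process)."""
    model = dict(model)
    model["observations"] = model.get("observations", []) + inputs
    return model

def transmit_update(model, second_user):
    """Step 3: send a relevant portion to a second user (post-solution activity)."""
    return {"to": second_user, "portion": model["observations"][-1:]}

devices = [{"input": "pose-A"}, {"input": "pose-B"}]
model = update_model({}, receive_inputs(devices))
update = transmit_update(model, "user-2")
print(update)  # {'to': 'user-2', 'portion': ['pose-B']}
```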
The courts have found the following limitations indicative that an additional element (or combination of elements) may have integrated the exception into a practical application:

• An improvement in the functioning of a computer, or an improvement to other technology or technical field, as discussed in MPEP §§ 2106.04(d)(1) and 2106.05(a);
• Applying or using a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, as discussed in MPEP § 2106.04(d)(2);
• Implementing a judicial exception with, or using a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim, as discussed in MPEP § 2106.05(b);
• Effecting a transformation or reduction of a particular article to a different state or thing, as discussed in MPEP § 2106.05(c); and
• Applying or using the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception, as discussed in MPEP § 2106.05(e).

The courts have also identified limitations that did not integrate a judicial exception into a practical application:

• Merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f);
• Adding insignificant extra-solution activity to the judicial exception, as discussed in MPEP § 2106.05(g); and
• Generally linking the use of a judicial exception to a particular technological environment or field of use, as discussed in MPEP § 2106.05(h).

Based on the guidelines above, Examiner believes that claim 1 fails to satisfy any of the conditions for a practical application as outlined by the courts.
In fact, Examiner believes that the claim merely recites insignificant extra-solution activity appended to the judicial exception, or generally links the use of a judicial exception to a particular technological environment or field of use. Therefore, claim 1 fails step 2A, Prong Two, for integrating the abstract idea into a practical application.

Claim 1 is further considered under step 2B for determining whether the claim amounts to significantly more than the judicial exception. Limitations that the courts have found to qualify as "significantly more" when recited in a claim with a judicial exception include:

i. Improvements to the functioning of a computer, e.g., a modification of conventional Internet hyperlink protocol to dynamically produce a dual-source hybrid webpage, as discussed in DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1258-59, 113 USPQ2d 1097, 1106-07 (Fed. Cir. 2014) (see MPEP § 2106.05(a));

ii. Improvements to any other technology or technical field, e.g., a modification of conventional rubber-molding processes to utilize a thermocouple inside the mold to constantly monitor the temperature and thus reduce under- and over-curing problems common in the art, as discussed in Diamond v. Diehr, 450 U.S. 175, 191-92, 209 USPQ 1, 10 (1981) (see MPEP § 2106.05(a));

iii. Applying the judicial exception with, or by use of, a particular machine, e.g., a Fourdrinier machine (which is understood in the art to have a specific structure comprising a headbox, a paper-making wire, and a series of rolls) that is arranged in a particular way to optimize the speed of the machine while maintaining quality of the formed paper web, as discussed in Eibel Process Co. v. Minn. & Ont. Paper Co., 261 U.S. 45, 64-65 (1923) (see MPEP § 2106.05(b));

iv.
Effecting a transformation or reduction of a particular article to a different state or thing, e.g., a process that transforms raw, uncured synthetic rubber into precision-molded synthetic rubber products, as discussed in Diehr, 450 U.S. at 184, 209 USPQ at 21 (see MPEP § 2106.05(c));

v. Adding a specific limitation other than what is well-understood, routine, conventional activity in the field, or adding unconventional steps that confine the claim to a particular useful application, e.g., a non-conventional and non-generic arrangement of various computer components for filtering Internet content, as discussed in BASCOM Global Internet v. AT&T Mobility LLC, 827 F.3d 1341, 1350-51, 119 USPQ2d 1236, 1243 (Fed. Cir. 2016) (see MPEP § 2106.05(d)); or

vi. Other meaningful limitations beyond generally linking the use of the judicial exception to a particular technological environment, e.g., an immunization step that integrates an abstract idea of data comparison into a specific process of immunizing that lowers the risk that immunized patients will later develop chronic immune-mediated diseases, as discussed in Classen Immunotherapies Inc. v. Biogen IDEC, 659 F.3d 1057, 1066-68, 100 USPQ2d 1492, 1499-1502 (Fed. Cir. 2011) (see MPEP § 2106.05(e)).

Limitations that the courts have found not to be enough to qualify as "significantly more" when recited in a claim with a judicial exception include:

i. Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, e.g., a limitation indicating that a particular function such as creating and maintaining electronic records is performed by a computer, as discussed in Alice Corp., 573 U.S. at 225-26, 110 USPQ2d at 1984 (see MPEP § 2106.05(f));

ii.
Simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions that are well-understood, routine and conventional activities previously known to the industry, as discussed in Alice Corp., 573 U.S. at 225, 110 USPQ2d at 1984 (see MPEP § 2106.05(d));

iii. Adding insignificant extra-solution activity to the judicial exception, e.g., mere data gathering in conjunction with a law of nature or abstract idea such as a step of obtaining information about credit card transactions so that the information can be analyzed by an abstract mental process, as discussed in CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011) (see MPEP § 2106.05(g)); or

iv. Generally linking the use of the judicial exception to a particular technological environment or field of use, e.g., a claim describing how the abstract idea of hedging could be used in the commodities and energy markets, as discussed in Bilski v. Kappos, 561 U.S. 593, 595, 95 USPQ2d 1001, 1010 (2010), or a claim limiting the use of a mathematical formula to the petrochemical and oil-refining fields, as discussed in Parker v. Flook, 437 U.S. 584, 588-90, 198 USPQ 193, 197-98 (1978) (MPEP § 2106.05(h)).

Based on the qualifying conditions above, Examiner does not recognize an additional element in the claim that amounts to significantly more than the judicial exception. The claim again appears to recite insignificant extra-solution activity added to the judicial exception, or to generally link the use of the judicial exception to a particular technological environment or field of use. Therefore, the claim fails step 2B of the guidelines because it does not include an additional element that amounts to significantly more than the judicial exception, and hence is not eligible under § 101.
Similar arguments with respect to the abstract idea and consideration of the claim under step 2A, Prong Two, in claim 1 above apply to system claim 9 as well. However, claim 9 additionally includes an image capturing device to capture an image and a processor to process data, which are considered under step 2B as potential additional elements that amount to significantly more than the judicial exception. With respect to the camera capturing an image, the camera capturing an image is an element that merely defines a well-understood, routine, conventional activity that does not amount to significantly more than the judicial exception (MPEP 2106.05(d)(I)). Moreover, claims can recite a mental process even if they are claimed as being performed on a computer (MPEP § 2106.04(a)(2), subsection III). In the instant case, a user could visually view the model and, based on observation, analysis, and judgment, mentally update the received model. The Supreme Court recognized this in Benson, determining that a mathematical algorithm for converting binary coded decimal to pure binary within a computer’s shift register was an abstract idea. The Court concluded that the algorithm could be performed purely mentally even though the claimed procedures "can be carried out in existing computers long in use, no new machinery being necessary." 409 U.S. at 67, 175 USPQ at 675. See also Mortgage Grader, 811 F.3d at 1324, 117 USPQ2d at 1699 (concluding that the concept of "anonymous loan shopping" recited in a computer system claim is an abstract idea because it could be "performed by humans without a computer"). Therefore, claim 9 fails step 2B of the guidelines, as neither the camera nor the processor adds significantly more to the claim. Hence claim 9 is ineligible under § 101.
Regarding claims 2 and 10, the claims recite wherein the virtual world model resides on a networked memory, referring to an element for storing that is merely a well-understood, routine, conventional activity that does not amount to significantly more than the judicial exception under step 2B (MPEP 2106.05(d)(I)). Therefore, claims 2 and 10 are ineligible under § 101.

Regarding claims 3 and 11, the claims recite wherein the first user and the second user are located at respective different locations, which the courts have found to be insignificant extra-solution activity under selecting a particular data source or type of data to be manipulated, e.g., iv. Requiring a request from a user to view an advertisement and restricting public access, Ultramercial, 772 F.3d at 715-16, 112 USPQ2d at 1754 (MPEP 2106.05(g)(3)). Therefore, claims 3 and 11 are ineligible under § 101.

Regarding claims 4 and 12, the claims recite wherein each of the respective devices of the plurality of users is selected from the group consisting of FOV cameras, other cameras, sensors, eye tracking first devices, and audio first devices, referring to gathering data under insignificant extra-solution activity, by a combination of elements, as well-understood, routine, conventional activity (MPEP 2106.05(d)(I)(2)). Therefore, claims 4 and 12 are ineligible under § 101.
Regarding claims 5 and 13, the claims recite transmitting the updated information corresponding to the respective portions of the virtual world model to the first user, wherein the updated information indicates whether any portion of the updated information needs to be displayed to the first user, referring to post-solution activity of distributing data under insignificant extra-solution activity, e.g., a printer that is used to output a report of fraudulent transactions, which is recited in a claim to a computer programmed to analyze and manipulate information about credit card transactions in order to detect whether the transactions were fraudulent (MPEP 2106.05(g)). Also, the claims lack a functional step acting on the received specific data; i.e., the indication data within the broader data is still treated as part of the combined gathered data. Therefore, claims 5 and 13 are ineligible under § 101.

Regarding claims 6 and 13, the claims recite transmitting the updated information corresponding to the respective portions of the virtual world model to the plurality of users, wherein the updated information indicates whether any portion of the updated information needs to be displayed to each of the plurality of users, referring to post-solution activity of distributing data under insignificant extra-solution activity (MPEP 2106.05(g)). Also, the claims lack a functional step acting on the received specific data; i.e., the indication data within the broader data is still treated as part of the combined gathered data. Therefore, claims 6 and 13 are ineligible under § 101.

Regarding claims 7, 14 and 15, the claims recite receiving second input from a second device of the second user, further referring to gathering data under insignificant extra-solution activity (MPEP 2106.05(g)); updating the virtual world model based on the received second input, further referring to an abstract idea, e.g.,
a mental process of observation, analysis, and judgment (see MPEP § 2106.04(a)(2), subsection III); and transmitting second updated information corresponding to a second portion of the virtual world model to the first user, further referring to post-solution activity, under insignificant extra-solution activity, of outputting the data. Therefore, claims 7, 14 and 15 are ineligible under § 101.

Regarding claims 8 and 16, the claims recite wherein the second updated information corresponds to movement of an avatar of the second user in the second portion of the virtual world model, referring to a mental process, as claims recite a mental process when they contain limitations that can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions. Examples of claims that recite mental processes include a claim to identifying head shape and applying hair designs, similar to an avatar, which is a process that can be practically performed in the human mind, In re Brown, 645 Fed. App'x 1014, 1016-17 (Fed. Cir. 2016) (non-precedential). Therefore, claims 8 and 16 are ineligible under § 101.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-7 and 9-15 are rejected under 35 U.S.C. 103 as being unpatentable over US 10539787 B2 to Haddick et al. (hereinafter ‘Haddick’).

Regarding claim 1, Haddick discloses a method for updating a virtual world (column 66, lines 34-44, wherein the user may be able to control their view perspective relative to a 3D projected image, such as a 3D projected image associated with the external environment, a 3D projected image that has been stored and retrieved, as virtual world, a 3D displayed movie (such as downloaded for viewing), and the like. For instance, and referring again to FIG.
27, the user may be able to change the view perspective of the 3D displayed image 1512C, such as by turning their head, and where the live external environment and the 3D displayed image stay together even as the user turns their head, moves their position, and the like, as updating), comprising: receiving a plurality of input from respective devices of a plurality of users, the plurality of input corresponding to a physical environment of a first user (column 66, line 64 through column 67, line 2, wherein two separate eyepiece users may wish to view the same 3D map, game projection, point-of-interest projection, and the like, where the two viewers are not only seeing the same projected content, but where the projected content's view is synchronized between them, as including environment of the first user); updating a virtual world model based on the received plurality of input, the virtual world model corresponding to the physical environment of the first user (column 93, lines 22-25, wherein a soldier clearing a village may virtually mark a house as cleared by associating a message or identifier with the house, such as a big X marking the location of the house, as updating); and transmitting updated information corresponding to respective portions of the virtual world model to a second user, wherein the updated information indicates whether any portion of the updated information needs to be displayed to the second user (column 93, lines 25-32, wherein the soldier may indicate that only other American soldiers may be able to receive the location-based content, as whether the updated information needs to be displayed. When other American soldiers pass the house, they may receive an indication automatically, such as by seeing the virtual ‘X’ on the side of the house if they have an eyepiece or some other augmented reality-enabled device, or by receiving a message indicating that the house has been cleared, as the transmitted updated portion information).
Inasmuch as Applicant may disagree with the Examiner’s assessment that the updated information indicates whether any portion of the updated information needs to be displayed to the second user, Haddick also, in an alternative approach, discloses determining whether or not to display the updated image to one or more users (column 70, line 63 through column 71, line 2, wherein an icon representing an incoming email may indicate an email being received, as the virtual world update. The user may notice the icon and choose to ignore it (such as the icon disappearing after a period of time if not activated, such as by a gaze or some other control facility). Alternately, the user may notice the visual indicator and choose to ‘activate’ it by gazing in the direction of the visual indicator, as whether to display the image update). Therefore, it would have been obvious to one of ordinary skill in the art to combine the updated information indicating whether any portion of the updated information needs to be displayed to the second user with the updating of the model in Haddick’s method, so as to reduce clutter in the narrow portion of the user's visual field around the gaze direction where the eye's highest visual input resides (column 70, lines 42-45).

Regarding claim 2, Haddick discloses wherein the virtual world model resides on a networked memory (column 66, lines 12-20, wherein the system is able to determine the location of the house 1508C and provide location information 1514C and a 3D map superimposed onto the user's view of the environment. In embodiments, the information associated with an environmental feature may be provided by an external facility, inherently as some networked memory, such as communicated with through a wireless communication connection, or stored internal to the eyepiece, such as downloaded to the eyepiece for the current location).
Regarding claim 3, Haddick discloses wherein the first user and the second user are located at respective different locations (column 99, line 61 through column 100, line 4, wherein the game may be scheduled, and in some instances, players, as users, may select a particular time and place for the game, distribute directions to the site where the game will be played, etc. Later, the players meet and check into the game, with one or more players using the augmented reality glasses. Participants then play the game and if applicable, the game results and any statistics (scores of the players, game times, etc.) may be stored. Once the game has begun, the location may change for different players in the game, sending one player to one location and another player or players to a different location).

Regarding claim 4, Haddick discloses wherein each of the respective devices of the plurality of users is selected from the group consisting of FOV cameras, other cameras, sensors, eye tracking first devices, and audio first devices (column 130, line 64 through column 131, line 3, wherein user action capture inputs and/or devices may include a head tracking system, camera, voice recognition system, body movement sensor (e.g., kinetic sensor), eye-gaze detection system, tongue touch pad, sip-and-puff systems, joystick, cursor, mouse, touch screen, touch sensor, finger tracking devices, 3D/2D mouse, inertial movement tracking ...).

Regarding claim 5, Haddick discloses the method further comprising: transmitting the updated information corresponding to the respective portions of the virtual world model to the first user, wherein the updated information indicates whether any portion of the updated information needs to be displayed to the first user (column 70, line 63 through column 71, line 2, wherein an icon representing an incoming email may indicate an email being received.
The user may notice the icon and choose to ignore it, as updated information not to be displayed (such as the icon disappearing after a period of time if not activated, such as by a gaze or some other control facility). Alternately, the user may notice the visual indicator and choose to ‘activate’ it by gazing in the direction of the visual indicator, as to be displayed).

Regarding claim 6, Haddick discloses the method further comprising: transmitting the updated information corresponding to the respective portions of the virtual world model to the plurality of users, wherein the updated information indicates whether any portion of the updated information needs to be displayed to each of the plurality of users (column 93, lines 22-32, wherein a soldier clearing a village may virtually mark a house as cleared by associating a message or identifier with the house, such as a big X marking the location of the house. The soldier may indicate that only other American soldiers may be able to receive the location-based content. When other American soldiers pass the house, they may receive an indication automatically, such as by seeing the virtual ‘X’ on the side of the house if they have an eyepiece or some other augmented reality-enabled device, or by receiving a message indicating that the house has been cleared, as updated information, i.e., the X is displayed only to the plurality of other users).
Regarding claim 7, Haddick discloses the method further comprising: receiving second input from a second device of the second user, the second input corresponding to a physical environment of the second user; updating the virtual world model based on the received second input, the virtual world model corresponding to the physical environment of the first user; and transmitting second updated information corresponding to a second portion of the virtual world model to the first user (column 67, lines 2-8, wherein two users may want to jointly view a 3D map of a region, and the image is synchronized such that the one user may be able to point at a position on the 3D map, as updated information of the second user, that the other user is able to see and interact with, as the first user. The two users may be able to move around the 3D map and share a virtual-physical interaction between the two users and the 3D map, as the common physical environment of both users, and the like), wherein the second updated information indicates whether any portion of the second updated information needs to be displayed to the first user (column 67, lines 8-18, wherein further, a group of eyepiece wearers may be able to jointly interact with a projection as a group. In this way, two or more users may be able to have a unified augmented reality experience through the coordination-synchronization of their eyepieces. Synchronization of two or more eyepieces may be provided by communication of position information between the eyepieces, such as absolute position information, relative position information, translation and rotational position information, and the like, such as from position sensors as described herein (e.g., gyroscopes, IMU, GPS, and the like), inherently including the act of determining whether to display the update to one or more other users based on, e.g., the position of the user).
Regarding claim 9, Haddick discloses an augmented reality display system (fig. 46, system 2700), comprising: an image capturing device of the augmented reality display system to capture first input corresponding to a physical environment of a first user, wherein the image capturing device comprises one or more image capturing sensors (column 61, lines 31-37, wherein the control device may control a pointing function associated with the viewed surrounding environment. The pointing function may be placing a cursor on a viewed object in the surrounding environment. The viewed object's location position may be determined by the processor in association with a camera integrated with the eyepiece); and a processor coupled to the image capturing device (column 61, lines 35-37, wherein the viewed object's location position may be determined by the processor in association with a camera integrated with the eyepiece.) and configured to: receive a plurality of input from respective augmented reality devices of a plurality of other users, the plurality of input corresponding to a physical environment of the first user (column 66 line 64 through column 67, line 2, wherein two separate eyepiece users may wish to view the same 3D map, game projection, point-of-interest projection, and the like, where the two viewers are not only seeing the same projected content, but where the projected content's view is synchronized between them, as including environment of the first user); update a virtual world model based on the received plurality of input, the virtual world model corresponding to the physical environment of the first user (column 93, lines 22-25, wherein a soldier clearing a village may virtually mark a house as cleared by associating a message or identifier with the house, such as a big X marking the location of the house, as updating); and transmit updated information corresponding to respective portions of the virtual world model to a second user (column 67, lines 2-8, 
wherein two users may want to jointly view a 3D map of a region, and the image is synchronized such that the one user may be able to point at a position on the 3D map, as updated information of the second user, that the other user is able to see and interact with, as the first user. The two users may be able to move around the 3D map and share a virtual-physical interaction between the two users and the 3D map, as the common physical environment of both users, and the like), wherein the updated information indicates whether any portion of the updated information needs to be displayed to the second user and whether any portion of the updated information needs to be displayed to each of the plurality of other users (column 67, lines 8-18, wherein further, a group of eyepiece wearers may be able to jointly interact with a projection as a group. In this way, two or more users may be able to have a unified augmented reality experience through the coordination-synchronization of their eyepieces. Synchronization of two or more eyepieces may be provided by communication of position information between the eyepieces, such as absolute position information, relative position information, translation and rotational position information, and the like, such as from position sensors as described herein (e.g. gyroscopes, IMU, GPS, and the like), inherently including the act of determining whether to display the update to any one or more users based on, e.g., the position of the user if not at the predetermined coordinates). 
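The claim-9 mapping above describes, in effect, a shared virtual world model that is updated from per-user input and whose updates carry per-user display decisions. A minimal sketch of that flow follows; the names (`Update`, `VirtualWorldModel`, `visible_updates`) and the audience-plus-position visibility rule are illustrative assumptions of the editor, not an implementation taken from the application or from Haddick:

```python
from dataclasses import dataclass, field

@dataclass
class Update:
    """One change to the virtual world model, e.g. an 'X' marking a cleared house."""
    content: str
    position: tuple   # world coordinates of the update
    audience: set     # user ids permitted to see it (e.g. "other American soldiers")

@dataclass
class VirtualWorldModel:
    updates: list = field(default_factory=list)

    def apply(self, update: Update) -> None:
        # Updating the model based on received input from a user's device.
        self.updates.append(update)

def visible_updates(model: VirtualWorldModel, user_id: str,
                    user_pos: tuple, radius: float = 10.0) -> list:
    """Per-user display decision: an update is shown only if the user is in its
    audience AND close enough to its position (an invented, illustrative rule)."""
    def near(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5 <= radius
    return [u for u in model.updates
            if user_id in u.audience and near(user_pos, u.position)]
```

Here the "needs to be displayed" determination is modeled as a pure function of an update's audience and the viewer's position, mirroring the examiner's reading that display is inherently conditioned on where the user is.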
Insofar as Applicant may disagree with the Examiner's assessment that the updated information indicates whether any portion of the updated information needs to be displayed to the second user, Haddick also, in an alternative approach, discloses determining whether or not to display the updated image to a user (column 70, line 63 through column 71, line 2, wherein an icon representing an incoming email may indicate an email being received, as the virtual world update. The user may notice the icon and choose to ignore it (such as the icon disappearing after a period of time if not activated, such as by a gaze or some other control facility). Alternately, the user may notice the visual indicator and choose to ‘activate’ it by gazing in the direction of the visual indicator, as determining whether to display the image update). Therefore, it would have been obvious to one of ordinary skill in the art to combine the updated information indicating whether any portion of the updated information needs to be displayed to the second user with the updating of the model in Haddick's method, so as to reduce clutter in the narrow portion of the user's visual field around the gaze direction where the eye's highest visual input resides (column 70, lines 42-45). Regarding system claims 10-15, please refer to the corresponding method claims 2-7 above for further teachings. Claims 8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Haddick in view of US 2017/0113141 A1 to AROZON et al (hereinafter ‘AROZON’). Regarding claims 8 and 16, Haddick discloses wherein the second updated information corresponds to movement of an element of the second user in the second portion of the virtual world model (column 101, lines 26-31, wherein such elements associated with users may include weapons, messages, currency, a 3D image of the user and the like. 
Based on a user's location or other data, he or she may encounter, view, or engage, by any means, other users and 3D elements associated with other users). However, Haddick does not specifically disclose wherein the second updated information corresponds to movement of an avatar of the second user in the second portion of the virtual world model. AROZON discloses that the second updated information corresponds to movement of an avatar of the second user (Para [0017], wherein the viewing apparatus 190 can utilize, for example, liquid crystal display technology to superimpose virtual gaming information (e.g., an avatar representative of another player, virtual objects or obstructions, etc.) onto a transparent viewing display; Para [0070], wherein if the first player 814 moves the physical shield to conceal himself from the view of the second player 854 in the virtual gaming space, then the gaming server 730 can detect the movement and position of the shield and map this movement and position to the virtual gaming space such that the virtual first player is shielded from the view of the second player, as the portion determined not to be seen by the second player). Haddick and AROZON are combinable because they both disclose virtual image sharing between multiple users. Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the second updated information corresponding to movement of an avatar of the second user in the second portion of the virtual world model of AROZON's method with Haddick's, because it enables a user to see real-world objects/users in a location of the user (Para [0017]). Contact Information Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHERVIN K NAKHJAVAN whose telephone number is (571) 272-5731. The examiner can normally be reached Monday-Friday, 9:00-5:00 PST. 
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sue Lefkowitz can be reached at (571)272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /SHERVIN K NAKHJAVAN/ Primary Examiner, Art Unit 2672

Prosecution Timeline

Apr 15, 2024: Application Filed
Feb 27, 2026: Non-Final Rejection (§101, §103, §DP) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602766: METHOD, APPARATUS, DEVICE, MEDIUM AND PRODUCT FOR DETECTING ALIGNMENT OF BATTERY ELECTRODE PLATES (granted Apr 14, 2026; 2y 5m to grant)
Patent 12597159: SYSTEM, INFORMATION PROCESSING APPARATUS, METHOD, AND COMPUTER-READABLE MEDIUM (granted Apr 07, 2026; 2y 5m to grant)
Patent 12592313: ANALYZING SURGICAL VIDEOS TO IDENTIFY A BILLING CODING MISMATCH (granted Mar 31, 2026; 2y 5m to grant)
Patent 12579671: MINIATURIZED PHASE CALIBRATION APPARATUS FOR TIME-OF-FLIGHT DEPTH CAMERA (granted Mar 17, 2026; 2y 5m to grant)
Patent 12561791: METHOD TO CALIBRATE, PREDICT, AND CONTROL STOCHASTIC DEFECTS IN EUV LITHOGRAPHY (granted Feb 24, 2026; 2y 5m to grant)
Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 88%
With Interview (+10.9%): 99%
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 616 resolved cases by this examiner. Grant probability derived from career allow rate.
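The "With Interview" figure is consistent with simply adding the interview lift, in percentage points, to the 88% career allow rate; treating the combination as additive is an assumption of the editor about how the tool computes the number, checked here:

```python
# Figures from the projections above.
career_allow_rate = 88.0  # % (544 granted / 616 resolved, rounded)
interview_lift = 10.9     # percentage points

with_interview = career_allow_rate + interview_lift
print(round(with_interview))  # 99
```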
