Prosecution Insights
Last updated: April 19, 2026
Application No. 18/202,102

SYSTEMS, METHODS, AND DEVICES FOR A VIRTUAL ENVIRONMENT REALITY MAPPER

Non-Final OA (§103, §112)

Filed: May 25, 2023
Examiner: MAZUMDER, SAPTARSHI
Art Unit: 2612
Tech Center: 2600 (Communications)
Assignee: Meetkai Inc.
OA Round: 3 (Non-Final)

Grant Probability: 64% (Moderate)
OA Rounds: 3-4
To Grant: 2y 8m
With Interview: 76%

Examiner Intelligence

Career Allow Rate: 64% (grants 64% of resolved cases; 241 granted / 375 resolved; +2.3% vs TC avg)
Interview Lift: +11.8% for resolved cases with interview (Moderate, +12% lift)
Avg Prosecution: 2y 8m (typical timeline; 27 currently pending)
Total Applications: 402 (career history, across all art units)

Statute-Specific Performance

§101: 10.2% (-29.8% vs TC avg)
§103: 50.6% (+10.6% vs TC avg)
§102: 6.8% (-33.2% vs TC avg)
§112: 19.5% (-20.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 375 resolved cases.
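As a quick consistency check on the panels above, the headline figures hang together; a small sketch (the values come from the dashboard, but treating the 76% as the base grant probability plus the interview lift is an assumption about how that figure is derived):

```python
# Figures taken from the dashboard panels above.
granted, resolved = 241, 375

# Career allow rate: 241/375, displayed rounded to 64%.
allow_rate_pct = 100 * granted / resolved
print(round(allow_rate_pct, 1))  # 64.3

# "With Interview" probability: assumed here to be the 64% base grant
# probability plus the +11.8% interview lift, displayed rounded to 76%.
print(round(64.0 + 11.8))  # 76
```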

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/10/2025 has been entered.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C.
112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-19 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claims 1 and 12 recite “a virtual avatar configured to mirror real-world behaviors based on motion tracking sensor data…wherein said mirroring comprises replicating user movement detected by said sensors in the avatar's actions within the virtual environment” and claim 19 recites “mirror real-world behaviors by replicating user movement detected by said sensors in the avatar's actions within the virtual environment”. Applicant does not have support for this limitation. Nowhere in the specification is it written that generation of the avatar or mirroring of the real world happens based on sensor data. The closest support available is “[0030]…VR headsets may communicate with the system via a cable or wirelessly and include motion tracking sensors to track user movement, thus enabling a 360-degree world. VR headsets may also connect to smartphones which now provide an even more real-world experience using the smartphone's motion sensors and other built in sensors in conjunction with the VR headset.
[0101] A Persona Creator 1730 may be utilized to create the Character by combining the Persona, Language Module, and Avatar. In one embodiment, a Character Controller 1725 may be in communication with the Persona Creator 1730 and Logic Controller 1720 in order to create as output, a set of Character Actions 1727. The Persona Creator 1730 may be in communication with an Avatar Creator 1760 that generates the 3D avatar using procedural generation, and an AI Personifier 1740 which adds personality to the character by using ML algorithm for emotion mimicking”.

Claims 1, 12 and 19 also recite “syncing the functions of a set of devices present in a virtual world environment with a set of devices present in a physical world environment based on parallel, user-specific data streams”. Applicant does not have support for syncing based on user-specific data streams. It is noted that not all the components require user-specific data, and according to Fig. 13 all users' preferences get aggregated. Claims 2-11 and 13-18 are also rejected by virtue of dependency. Claim 16 recites “mimic user emotions in the avatar using machine learning algorithms based on sensor input”. Applicant does not have support for mimicking based on sensor input.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-8, 10-12 and 17 are rejected under 35 U.S.C.
103 as being unpatentable over Ponce et al. (US Patent No. 11756260 “Ponce”) in view of Loeb et al. (US Patent No. 10728338 “Loeb”), Branson et al. (US Patent No. 8564621 “Branson”) and Borchetta et al. (US Pat. Pub. No. 20190197590 “Borchetta”).

Regarding claim 1, Ponce teaches A method (Col 1 lines 33-34 “The method includes receiving a request to display a target three-dimensional environment in a virtual reality system”) comprising: generating, by a computing device having a processor and addressable memory (Col 2 lines 13-17 “In another aspect, this document describes one or more non-transitory computer-readable storage devices coupled to one or more processors and having instructions stored thereon which, when executed by the one or more processors, cause the one or more processors to perform various operations”), a 3D virtual space for a given point of interest (POI) (Col 1 line 64-Col 2 line 3 “In an example context, a user can access a 3D visualization application to view a map of properties (e.g., houses, buildings, office areas, boats, event centers, shopping centers, transportation centers, etc.) that are available (e.g., for sale or could be built) in a particular region of interest (e.g., within a given radius) around a point of interest (e.g., center of a city)”); determining whether a set of user preferences are available within the generated 3D virtual space and customize the 3D virtual space based on the availability of the set of user preferences (Col 4 lines 14-21 “In some implementations, the user can view the property in a mode that reflects a preferred style of the user. The preferred style property display can provide a visual assistance to regular users who have a strong preference for a house style, which could be implemented in an available property that is currently designed in a significantly different style, assisting regular users to evaluate property based on its full potential.” Col 14 lines 63-65 “FIG.
3A is a flow chart of an example process 300 for displaying a 3-D environment from a stream of input images of a scene filtered and updated based on user preferences”); determining a generated 3D environment based on the customized 3D virtual space and availability of the set of user preferences (Col 22 lines 39-49 “An updated content of the target 3D environment is determined (314). The updated content of the target 3D environment can include a filtered target 3D environment generated by the server system. The filtering process can include object identification using depth maps and colored images, object classification to differentiate between structural features (e.g., walls, windows, entryways), fixed features (lighting fixtures, major kitchen appliances, window treatments, etc.), and removable features (e.g., pieces of furniture and décor) of the 3D environment”), wherein determining the generated 3D environment is further based on receiving data from a set of components and executing at least one of: an augmented reality sync component (integral part of processor 123 of Fig. 1), wherein the augmented reality sync component is executed based on receiving new augmented reality input via checking for augmented reality transmission data (Col 8 lines 11-20 “The display 132 can be worn by the user as part of a headset such that a user may wear the display over their eyes like a pair of goggles or glasses.” Col 10 lines 27-37 “In some implementations, the new content is automatically pushed to the user device 106 and the mobile device 108 as it becomes available. In some implementations, the VR application 130 and the VR controller application 140 can be configured to request approval to display new content. 
In response to receiving, by the user device 106 and/or the mobile device 108, a user input requesting the new content, the user device 106 and the mobile device 108 can retrieve, from the server system 104, a new content item (auxiliary data 120) that is available for virtual display by the user device 106 and the mobile device 108”); a video/audio reality mapper transmission component (integral part of processor 123 of Fig. 1), wherein the video/audio reality mapper transmission component is executed based on receiving new video/audio transmission via checking for video/audio transmission data (Col 10 lines 27-37 “In some implementations, the new content is automatically pushed to the user device 106 and the mobile device 108 as it becomes available.” Col 20 lines 16-17 and lines 36-38 “Color images are frames of two-dimensional (2-D) images or videos captured by a camera…The server system receives as input each of a plurality of color images in the stream of color images. The server system may process each color image in the stream of color images”. The new content/images are checked for availability and sent to the user device when available); Even though Ponce teaches IoT (Col 19 line 21 “For example, an online option of connecting to an IoT”), Ponce is silent about an Internet of Things (IoT) sync component, wherein the IoT sync component is executed based on receiving physical/virtual input via checking for IoT input; Loeb teaches an Internet of Things (IoT) sync component, wherein the IoT sync component is executed based on receiving physical/virtual input via checking for IoT input (Col 34 lines 18-29 “A physical IoT controller 606 may be used to register, transform, and map physical functional models (input, output, and state) (e.g., stored in storage 616) of individual and/or group of IoT to 3D virtual behavior models in a virtual world.
For example, the physical IoT controller 606 may determine and register locations of IoT devices and/or sensors such as sensors in IoT devices 626 and/or sensors in user devices 602. A physical IoT controller 606 may synchronize the physical functional and virtual behavior models (e.g., stored in storage 616), control and adjust physical IoT configurations”. Col 35 lines 5-12 “The cooperative control actions may be provided by the virtual world manager 610 to adjust and increase outputs of one or more actuators (e.g., physical world controllers 606) to support a particular zone that requires more resources or to offload overloaded devices. For example, when an overloaded heater with very high “on-duration” is detected, the heater may be viewed in the virtual world using the color red”); Loeb and Ponce are analogous as both of them are related to data processing. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ponce by having an Internet of Things (IoT) sync component, wherein the IoT sync component is executed based on receiving physical/virtual input via checking for IoT input, as taught by Loeb, and to use that functionality with Ponce's processor. The motivation for the above is to keep consistency between the real and virtual worlds. Even though Ponce as modified by Loeb teaches a virtual and physical item sync component (integral part of processor 123 of Fig. 1), wherein the virtual and physical item sync component is executed based on receiving new virtual and physical item via checking for virtual and physical item data (Ponce Col 10 lines 27-37 “In some implementations, the new content is automatically pushed to the user device 106 and the mobile device 108 as it becomes available. In some implementations, the VR application 130 and the VR controller application 140 can be configured to request approval to display new content.
In response to receiving, by the user device 106 and/or the mobile device 108, a user input requesting the new content, the user device 106 and the mobile device 108 can retrieve, from the server system 104, a new content item (auxiliary data 120) that is available for virtual display by the user device 106 and the mobile device 108”. Here, “new content” can include physical or virtual content), the combination is silent about wherein the virtual and physical item sync component is further based on receiving data from at least one of: a mail transaction sync component, wherein the mail transaction sync component is executed based on receiving new mail and third party data that comprise mail data based on checking for mail and third party data; and a third party transaction sync component, wherein the third party transaction sync component is executed based on receiving new mail and third party data that comprise third party data based on checking for mail and third party data; Branson teaches that the virtual and physical item sync component (integral part of processor 122) is further based on receiving data from at least one of: a mail transaction sync component, wherein the mail transaction sync component is executed based on receiving new mail and third party data that comprise mail data based on checking for mail and third party data (Col 5 lines 2-17 “Furthermore, embodiments of the invention may synchronize the virtual item with the real-world eBook reader 106, such that the virtual item in the virtual world may reflect any changes to the physical eBook reader 106 existing in the real world…Upon receiving the state change information (i.e., the page turn) from the eBook reader 106, the client system 120 may replicate this change with the virtual world system 160 using the network 140.
The virtual world server may then update the virtual item in the virtual world to reflect the page turn”); Branson and Ponce modified by Loeb are analogous as both of them are related to data processing. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have further modified Ponce as modified by Loeb so that the virtual and physical item sync component is further based on receiving data from at least one of: a mail transaction sync component, wherein the mail transaction sync component is executed based on receiving new mail and third party data that comprise mail data based on checking for mail and third party data; and a third party transaction sync component, wherein the third party transaction sync component is executed based on receiving new mail and third party data that comprise third party data based on checking for mail and third party data, as taught by Branson, and to use that functionality with the processor of Ponce as modified by Loeb. The motivation for the above is to keep consistency between the real and virtual worlds.
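The claims mapped above repeatedly recite components that are "executed based on receiving new … data via checking for … data." As a neutral illustration of that check-then-execute dispatch pattern, here is a minimal sketch; all names are hypothetical and are not drawn from Ponce, Loeb, or Branson:

```python
# Hypothetical check-then-execute dispatcher: each sync component runs
# only when new data of its type has been received (names illustrative).
from typing import Callable

def run_components(inbox: dict, handlers: dict) -> list:
    events = []
    for data_type, handler in handlers.items():
        for item in inbox.get(data_type, []):  # "checking for ... data"
            events.append(handler(item))       # component is "executed"
        inbox[data_type] = []                  # queue consumed
    return events

handlers = {
    "mail": lambda item: ("mail_transaction_sync", item),
    "third_party": lambda item: ("third_party_transaction_sync", item),
}
inbox = {"mail": ["package_42"], "third_party": []}
print(run_components(inbox, handlers))  # [('mail_transaction_sync', 'package_42')]
```

A component whose queue is empty simply does not execute, which matches the conditional "based on checking" phrasing of the claim language.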
Even though Ponce modified by Loeb and Branson teaches that the persona embodiment of physical reality component comprises a virtual avatar (Ponce Col 3 lines 2-3 “For example, a virtual agent of an interactive presentation of a property can sit on one of the chairs”), the combination is silent about a persona embodiment of physical reality component, wherein the persona embodiment of physical reality component is executed based on if a user setting for persona embodiment is enabled via checking for embodiment notifications; and wherein the persona embodiment of physical reality component comprises a virtual avatar configured to mirror real-world behaviors based on motion tracking sensor data from a VR headset or smartphone, and user preference settings, wherein said mirroring comprises replicating user movement detected by said sensors in the avatar's actions within the virtual environment; Borchetta teaches a persona embodiment of physical reality component, wherein the persona embodiment of physical reality component is executed based on if a user setting for persona embodiment is enabled via checking for embodiment notifications (“[0083] In some embodiments, a data file (e.g., an image file and/or file storing the choices made by the user in customizing the avatar) may be stored in association with the user. For example, an identifier for uniquely identifying the avatar created for the user based on the choices made by the user may be generated and stored in association with the file and stored in a user account data record of the user.
In some embodiments, the avatar created for the user based on the choices made by the user via menu 410 may be utilized or included in various marketing content for a campaign the user is participating in (e.g., in e-mail messages sent on behalf of the user”); wherein the persona embodiment of physical reality component comprises a virtual avatar configured to mirror real-world behaviors based on motion tracking sensor data from a VR headset or smartphone, and user preference settings, wherein said mirroring comprises replicating user movement detected by said sensors in the avatar's actions within the virtual environment (“[0034]…. For example, a User Device 120 may comprise a personal computer such as a desktop, laptop or tablet computer, a cellular telephone or a smartphone or other mobile device. [0041]… An input device may communicate with….. a video camera, a motion detector, a digital camera. [0078] Referring now to FIG. 4, illustrated therein is an example user interface 400 which may be output to a user for facilitating the generation of a customized avatar to represent a user. The term “avatar” as used herein unless indicated otherwise, may refer to a graphical representation, likeness or caricature of a user. [0083] In some embodiments, a data file (e.g., an image file and/or file storing the choices made by the user in customizing the avatar) may be stored in association with the user. For example, an identifier for uniquely identifying the avatar created for the user based on the choices made by the user . [0110]….. (ii) creating a first personalized graphical representation of the first user based on information received from the first user, thereby creating a first user graphical representation; [0116]….. In some embodiments, a user may be provided with software tools for modifying a default avatar's appearance by selecting options or values from a menu. 
In other embodiments, a user may be provided with software tools for uploading one or more images or graphics for use in customizing an avatar (e.g., an image of the user's face, which may be rendered into a caricature or other derivative representation)”); Borchetta and Ponce modified by Loeb and Branson are analogous as both of them are related to data processing. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ponce as modified by Loeb and Branson to have a persona embodiment of physical reality component, wherein the persona embodiment of physical reality component is executed based on if a user setting for persona embodiment is enabled via checking for embodiment notifications, and a virtual avatar that is configured to mirror real-world behaviors based on physical sensor data and user preference settings, as taught by Borchetta, and to use that functionality with Ponce's processor. The motivation for the above is to customize avatar creation.
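The disputed mirroring limitation, as quoted in the §112 rejection above, reduces to replicating sensor-detected user movement in the avatar's actions. A minimal hypothetical sketch of that behavior (illustrative only; not taken from the specification or any cited reference):

```python
# Hypothetical sketch: motion-tracking sensor deltas from a VR headset
# are replicated in the avatar's position in the virtual environment.
from dataclasses import dataclass

@dataclass
class Avatar:
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

    def mirror(self, delta):
        """Replicate one frame of sensor-detected user movement."""
        dx, dy, dz = delta
        self.x += dx
        self.y += dy
        self.z += dz

avatar = Avatar()
sensor_frames = [(0.0, 0.0, 0.5), (0.1, 0.0, 0.0)]  # simulated headset data
for frame in sensor_frames:
    avatar.mirror(frame)
print((avatar.x, avatar.y, avatar.z))  # (0.1, 0.0, 0.5)
```

Whether the specification's paragraphs [0030] and [0101] provide written-description support for this kind of sensor-driven mirroring is precisely the dispute framed above.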
Ponce modified by Loeb, Branson and Borchetta teaches updating the generated 3D environment to customize the 3D virtual space for a user based on the received data from at least one of the components from the set of components, and wherein the set of components are being continuously executed in real-time in a distributed, multi-modal synchronization framework thereby syncing the functions of a set of devices present in a virtual world environment with a set of devices present in a physical world environment based on parallel, user-specific data streams from said components, wherein each component provides data specific to the user and is processed concurrently to enable real-time synchronization across domains (Ponce Col 8 lines 26-32 “To provide accurate interactions between VR contents and the see-through reality on the display 132, the VR system 100 may substantially continuously process sensor data acquired by sensors 128 at a high frequency rate (e.g., higher than 100 Hz) configured to generate the environment reconstruction 136 of the physical world around the user accessing the user device 106”. Col 26 lines 50-56 “Similarly, while operations are depicted in the drawings and recited in the claims in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing can be advantageous”. Ponce teaches multiple tasks/components as shown above; some of the tasks/components are incorporated into Ponce based on the secondary references. Ponce supports multitasking and parallel processing, so under Ponce's teaching all the components will run concurrently, as Ponce indicates this is advantageous and yields better outcomes).
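The Examiner's reading of the "parallel, user-specific data streams" limitation leans on Ponce's statement that multitasking and parallel processing can be advantageous. A minimal sketch of components each draining their own stream concurrently (the structure and stream names are hypothetical, not from Ponce):

```python
# Hypothetical sketch: each sync component processes its own user-specific
# data stream; a thread pool runs the components concurrently.
from concurrent.futures import ThreadPoolExecutor

def sync_component(name, stream):
    """One component consuming one user-specific stream."""
    return name, [f"synced:{item}" for item in stream]

streams = {
    "iot": ["thermostat_on"],
    "augmented_reality": ["overlay_update"],
    "video_audio": ["frame_42"],
}
with ThreadPoolExecutor(max_workers=len(streams)) as pool:
    results = dict(pool.map(lambda kv: sync_component(*kv), streams.items()))
print(results["iot"])  # ['synced:thermostat_on']
```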
Regarding claim 2, Ponce modified by Loeb, Branson and Borchetta teaches wherein the IoT sync component is configured to provide a set of IoT devices that are remaining in sync between the virtual world environment and the physical world environment (Loeb Col 34 lines 22-26 “For example, the physical IoT controller 606 may determine and register locations of IoT devices and/or sensors such as sensors in IoT devices 626 and/or sensors in user devices 602. A physical IoT controller 606 may synchronize the physical functional and virtual behavior models”).

Regarding claim 3, Ponce modified by Loeb, Branson and Borchetta teaches wherein the augmented reality sync component is configured to provide users in the physical world environment at a certain location the ability to see users in the virtual world environment via AR lenses in the same location (Ponce Col 8 lines 11-20 “The display 132 can be worn by the user as part of a headset such that a user may wear the display over their eyes like a pair of goggles or glasses.” Col 16 lines 27-32 “In some implementations, the scene can include a virtual projection of a person (e.g., an agent) who can virtually guide the user through the visualization of the target 3D environment. The virtual projection of the person can be added to the 3D model as a 3-D object mask of the person represented as sitting, standing, or walking”).
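Claim 2's "remaining in sync" language, and the unilateral/bilateral synchronization Branson describes for claim 6 below, can be pictured as a digital-twin state copy. A hypothetical sketch (device names invented for illustration):

```python
# Hypothetical digital-twin sketch: a state change on a physical IoT
# device is replicated to its virtual counterpart (the physical-to-virtual,
# i.e. "unilateral synchronization", direction Branson describes).
class Device:
    def __init__(self, state="off"):
        self.state = state

def sync_physical_to_virtual(physical, virtual):
    virtual.state = physical.state

physical_heater = Device("off")
virtual_heater = Device("off")
physical_heater.state = "on"          # change happens in the real world
sync_physical_to_virtual(physical_heater, virtual_heater)
print(virtual_heater.state)  # on
```

Bilateral synchronization would simply add the mirror-image virtual-to-physical copy.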
Regarding claim 4, Ponce modified by Loeb, Branson and Borchetta teaches wherein the video/audio reality mapper transmission is configured to allow transmission of data from the physical world environment to the virtual world environment and vice versa (Ponce Col 6 lines 36-44 “Specifically, the processor 123 executes the algorithms and operations described in the illustrated figures, including the operations performing the functionality associated with the VR application 130 and VR controller application 140, as well as the various software modules, including the functionality for sending communications to and receiving transmissions from the sensors 102 170, the user device 106 and the mobile device 108”).

Regarding claim 5, Ponce modified by Loeb, Branson and Borchetta teaches wherein the virtual and physical item sync component is configured to ensure virtual items in the virtual world environment and physical items in the physical world environment are properly in sync (Branson Col 5 lines 2-17 “Furthermore, embodiments of the invention may synchronize the virtual item with the real-world eBook reader 106, such that the virtual item in the virtual world may reflect any changes to the physical eBook reader 106 existing in the real world…Upon receiving the state change information (i.e., the page turn) from the eBook reader 106, the client system 120 may replicate this change with the virtual world system 160 using the network 140. The virtual world server may then update the virtual item in the virtual world to reflect the page turn”).

Regarding claim 6, Ponce modified by Loeb, Branson and Borchetta teaches wherein the virtual and physical item sync component is configured to synchronize a virtual item and a physical item by ensuring that for a given virtual item, there is a corresponding physical item and vice versa (Branson Col 10 lines 19-27 “Generally, the synchronization represents that changes will be replicated from one item to its corresponding counterpart. In one embodiment, the synchronization is unilateral synchronization (i.e., changes are replicated in the direction of either physical-to-virtual or virtual-to-physical). In another embodiment, the synchronization is bilateral synchronization, where actions to the physical item are replicated to the virtual item 170, and actions to the virtual item 170 are replicated to the physical item”).

Regarding claim 7, Ponce modified by Loeb, Branson and Borchetta teaches wherein the third party transaction sync component is configured to create and synchronize transactions between the virtual world environment and the physical world environment (Branson Col 5 lines 63-66 “In one embodiment of the invention, the physical item and the virtual item may be synchronized either in real-time or periodically. In an alternate embodiment, the physical item and the virtual item may also be synchronized in response to certain events”).

Regarding claim 8, Ponce modified by Loeb, Branson and Borchetta teaches wherein the mail transaction sync component is configured to synchronize mail transactions between the virtual world environment and the physical world environment (Branson Col 5 lines 63-66 “In one embodiment of the invention, the physical item and the virtual item may be synchronized either in real-time or periodically. In an alternate embodiment, the physical item and the virtual item may also be synchronized in response to certain events”).

Regarding claim 10, Ponce modified by Loeb, Branson and Borchetta teaches wherein the persona embodiment of physical reality component is configured to create a non-player character (NPC) that embodies physical reality (Ponce Col 16 lines 28-31 “The visual objects can be stationary objects or moving objects. In some implementations, the scene can include a virtual projection of a person (e.g., an agent) who can virtually guide the user through the visualization of the target 3D environment”).

Regarding claim 11, Ponce modified by Loeb, Branson and Borchetta teaches wherein user preferences are used to provide a customized 3D virtual space version of a physical POI that is personalized to the user (Ponce Col 4 lines 14-21 “In some implementations, the user can view the property in a mode that reflects a preferred style of the user. The preferred style property display can provide a visual assistance to regular users who have a strong preference for a house style, which could be implemented in an available property that is currently designed in a significantly different style, assisting regular users to evaluate property based on its full potential.” Col 14 lines 63-65 “FIG.
3A is a flow chart of an example process 300 for displaying a 3-D environment from a stream of input images of a scene filtered and updated based on user preferences”).

Regarding claim 12, Ponce teaches A computing device having a processor and addressable memory, the processor configured to execute a set of components (Col 2 lines 13-17 “In another aspect, this document describes one or more non-transitory computer-readable storage devices coupled to one or more processors and having instructions stored thereon which, when executed by the one or more processors, cause the one or more processors to perform various operations”), comprising: an augmented reality sync component (integral part of processor 123 of Fig. 1), wherein the augmented reality sync component is executed based on receiving new augmented reality input via checking for augmented reality transmission data (Col 8 lines 11-20 “The display 132 can be worn by the user as part of a headset such that a user may wear the display over their eyes like a pair of goggles or glasses.” Col 10 lines 27-37 “In some implementations, the new content is automatically pushed to the user device 106 and the mobile device 108 as it becomes available. In some implementations, the VR application 130 and the VR controller application 140 can be configured to request approval to display new content. In response to receiving, by the user device 106 and/or the mobile device 108, a user input requesting the new content, the user device 106 and the mobile device 108 can retrieve, from the server system 104, a new content item (auxiliary data 120) that is available for virtual display by the user device 106 and the mobile device 108”); a video/audio reality mapper transmission component (integral part of processor 123 of Fig.
1), wherein the video/audio reality mapper transmission component is executed based on receiving new video/audio transmission via checking for video/audio transmission data (Col 10 lines 27-37 “In some implementations, the new content is automatically pushed to the user device 106 and the mobile device 108 as it becomes available.” Col 20 lines 16-17 and lines 36-38 “Color images are frames of two-dimensional (2-D) images or videos captured by a camera…The server system receives as input each of a plurality of color images in the stream of color images. The server system may process each color image in the stream of color images”. The new content/images are checked for availability and sent to the user device when available); Even though Ponce teaches IoT (Col 19 line 21 “For example, an online option of connecting to an IoT”), Ponce is silent about an Internet of Things (IoT) sync component, wherein the IoT sync component is executed based on receiving physical/virtual input via checking for IoT input; Loeb teaches an Internet of Things (IoT) sync component, wherein the IoT sync component is executed based on receiving physical/virtual input via checking for IoT input (Col 34 lines 18-29 “A physical IoT controller 606 may be used to register, transform, and map physical functional models (input, output, and state) (e.g., stored in storage 616) of individual and/or group of IoT to 3D virtual behavior models in a virtual world. For example, the physical IoT controller 606 may determine and register locations of IoT devices and/or sensors such as sensors in IoT devices 626 and/or sensors in user devices 602. A physical IoT controller 606 may synchronize the physical functional and virtual behavior models (e.g., stored in storage 616), control and adjust physical IoT configurations”.
Col 35 lines 5-12 “The cooperative control actions may be provided by the virtual world manager 610 to adjust and increase outputs of one or more actuators (e.g., physical world controllers 606) to support a particular zone that requires more resources or to offload overloaded devices. For example, when an overloaded heater with very high “on-duration” is detected, the heater may be viewed in the virtual world using the color red”); Loeb and Ponce are analogous as both of them are related to data processing. Therefore it would have been obvious for an ordinary skilled person in the art before the effective filing date of claimed invention to have modified Ponce by having an Internet of Things (IoT) sync component, wherein the IoT sync component is executed based on receiving physical/virtual input via checking for IoT input as taught by Loeb and use that functionality with Ponce’s processor. The motivation for the above is to keep consistency between real and virtual world. Ponce as modified by Loeb teaches a virtual and physical item sync component (integral part of processor 123 of Fig. 1), wherein the virtual and physical item sync component is executed based on receiving new virtual and physical item via checking for virtual and physical item data (Ponce Col 10 lines 27-37 “In some implementations, the new content is automatically pushed to the user device 106 and the mobile device 108 as it becomes available. In some implementations, the VR application 130 and the VR controller application 140 can be configured to request approval to display new content.
In response to receiving, by the user device 106 and/or the mobile device 108, a user input requesting the new content, the user device 106 and the mobile device 108 can retrieve, from the server system 104, a new content item (auxiliary data 120) that is available for virtual display by the user device 106 and the mobile device 108”. Here “new content” can include physical or virtual content) but is silent about wherein the virtual and physical item sync component is further based on receiving data from at least one of: a mail transaction sync component, wherein the mail transaction sync component is executed based on receiving new mail and third party data that comprise mail data based on checking for mail and third party data; and a third party transaction sync component, wherein the third party transaction sync component is executed based on receiving new mail and third party data that comprise third party data based on checking for mail and third party data; Branson teaches a virtual and physical item sync component (integral part of processor 122) that is further based on receiving data from at least one of: a mail transaction sync component, wherein the mail transaction sync component is executed based on receiving new mail and third party data that comprise mail data based on checking for mail and third party data (Col 5 lines 2-17 “Furthermore, embodiments of the invention may synchronize the virtual item with the real-world eBook reader 106, such that the virtual item in the virtual world may reflect any changes to the physical eBook reader 106 existing in the real world…..Upon receiving the state change information (i.e., the page turn) from the eBook reader 106, the client system 120 may replicate this change with the virtual world system 160 using the network 140.
The virtual world server may then update the virtual item in the virtual world to reflect the page turn”); Branson and Ponce modified by Loeb are analogous as both of them are related to data processing. Therefore it would have been obvious for an ordinary skilled person in the art before the effective filing date of claimed invention to have modified Ponce modified by Loeb by having the virtual and physical item sync component be further based on receiving data from at least one of: a mail transaction sync component, wherein the mail transaction sync component is executed based on receiving new mail and third party data that comprise mail data based on checking for mail and third party data; and a third party transaction sync component, wherein the third party transaction sync component is executed based on receiving new mail and third party data that comprise third party data based on checking for mail and third party data, as taught by Branson, and use that functionality with the processor of Ponce modified by Loeb. The motivation for the above is to keep consistency between real and virtual world.
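For illustration only (this sketch is not part of the record and does not appear in Ponce, Loeb, or Branson; all names are hypothetical), the recurring claim pattern above — a component "executed based on receiving new data via checking for data" — can be modeled as a poll-and-dispatch component that runs only when its check finds pending input:

```python
# Hypothetical sketch of the claimed "check-then-execute" sync component pattern.

def make_component(name, check_for_data, handle):
    """Build a sync component that executes only when new data is available."""
    def run():
        data = check_for_data()      # e.g., poll for AR transmission data
        if data is not None:         # new input found -> execute the component
            handle(data)
            return True
        return False                 # nothing pending -> component is skipped
    run.__name__ = name
    return run

# Example: an augmented reality sync component backed by a simple queue.
ar_queue = ["headset_pose_update"]
ar_sync = make_component(
    "augmented_reality_sync",
    check_for_data=lambda: ar_queue.pop() if ar_queue else None,
    handle=lambda d: None,           # e.g., apply the update to the 3D environment
)

ar_sync()   # new AR data present -> component executes
ar_sync()   # queue empty -> component does nothing
```

The same shape would cover the video/audio, IoT, and mail/third-party sync components recited in the claims, each with its own check and handler.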
Ponce modified by Loeb and Branson teaches the persona embodiment of physical reality component comprises a virtual avatar (Ponce Col 3 lines 2-3 “For example, a virtual agent of an interactive presentation of a property can sit on one of the chairs”) but is silent about a persona embodiment of physical reality component, wherein the persona embodiment of physical reality component is executed based on if a user setting for persona embodiment is enabled via checking for embodiment notifications; and wherein the persona embodiment of physical reality component comprises a virtual avatar configured to mirror real-world behaviors based on motion tracking sensor data from a VR headset or smartphone, and user preference settings, wherein said mirroring comprises replicating user movement detected by said sensors in the avatar's actions within the virtual environment; Borchetta teaches a persona embodiment of physical reality component, wherein the persona embodiment of physical reality component is executed based on if a user setting for persona embodiment is enabled via checking for embodiment notifications (“[0083] In some embodiments, a data file (e.g., an image file and/or file storing the choices made by the user in customizing the avatar) may be stored in association with the user. For example, an identifier for uniquely identifying the avatar created for the user based on the choices made by the user may be generated and stored in association with the file and stored in a user account data record of the user.
In some embodiments, the avatar created for the user based on the choices made by the user via menu 410 may be utilized or included in various marketing content for a campaign the user is participating in (e.g., in e-mail messages sent on behalf of the user”); wherein the persona embodiment of physical reality component comprises a virtual avatar configured to mirror real-world behaviors based on motion tracking sensor data from a VR headset or smartphone, and user preference settings, wherein said mirroring comprises replicating user movement detected by said sensors in the avatar's actions within the virtual environment (“[0034]…. For example, a User Device 120 may comprise a personal computer such as a desktop, laptop or tablet computer, a cellular telephone or a smartphone or other mobile device. [0041]… An input device may communicate with….. a video camera, a motion detector, a digital camera. [0078] Referring now to FIG. 4, illustrated therein is an example user interface 400 which may be output to a user for facilitating the generation of a customized avatar to represent a user. The term “avatar” as used herein unless indicated otherwise, may refer to a graphical representation, likeness or caricature of a user. [0083] In some embodiments, a data file (e.g., an image file and/or file storing the choices made by the user in customizing the avatar) may be stored in association with the user. For example, an identifier for uniquely identifying the avatar created for the user based on the choices made by the user . [0110]….. (ii) creating a first personalized graphical representation of the first user based on information received from the first user, thereby creating a first user graphical representation; [0116]….. In some embodiments, a user may be provided with software tools for modifying a default avatar's appearance by selecting options or values from a menu. 
In other embodiments, a user may be provided with software tools for uploading one or more images or graphics for use in customizing an avatar (e.g., an image of the user's face, which may be rendered into a caricature or other derivative representation)”); Borchetta and Ponce modified by Loeb and Branson are analogous as both of them are related to data processing. Therefore it would have been obvious for an ordinary skilled person in the art before the effective filing date of claimed invention to have modified Ponce modified by Loeb and Branson by having a persona embodiment of physical reality component, wherein the persona embodiment of physical reality component is executed based on if a user setting for persona embodiment is enabled via checking for embodiment notifications and a virtual avatar that is configured to mirror real-world behaviors based on physical sensor data and user preference settings as taught by Borchetta and use that functionality with Ponce’s processor. The motivation for the above is to customize avatar creation. Ponce modified by Loeb and Branson and Borchetta teaches wherein the computing device is further configured to: generate a 3D virtual space for a given point of interest (POI) (Ponce Col 1 lines 64-Col 2 lines 3 “In an example context, a user can access a 3D visualization application to view a map of properties (e.g., houses, buildings, office areas, boats, event centers, shopping centers, transportation centers, etc.) that are for available (e.g., for sale or could be built) in a particular region of interest (e.g., within a given radius) around a point of interest (e.g., center of a city)”); determine whether a set of user preferences are available within the generated 3D virtual space and customize the 3D virtual space based on the availability of the set of user preferences (Ponce Col 4 lines 14-21 “In some implementations, the user can view the property in a mode that reflects a preferred style of the user.
The preferred style property display can provide a visual assistance to regular users who have a strong preference for a house style, which could be implemented in an available property that is currently designed in a significantly different style, assisting regular users to evaluate property based on its full potential. Col 14 lines 63-65 “FIG. 3A is a flow chart of an example process 300 for displaying a 3-D environment from a stream of input images of a scene filtered and updated based on user preferences”); determine a generated 3D environment based on the customized 3D virtual space and availability of the set of user preferences, wherein the generated 3D environment is further determined based on receiving data from the set of components being executed (Ponce Col 22 lines 39-49 “An updated content of the target 3D environment is determined (314). The updated content of the target 3D environment can include a filtered target 3D environment generated by the server system. The filtering process can include object identification using depth maps and colored images, object classification to differentiate between structural features (e.g., walls, windows, entryways), fixed features (lighting fixtures, major kitchen appliances, window treatments, etc.), and removable features (e.g., pieces of furniture and décor) of the 3D environment”); and updating the generated 3D environment to customize the 3D virtual space for a user based on the received data from at least one of the components from the set of components, and wherein the set of components are being continuously executed in real-time in a distributed, multi-modal synchronization framework thereby syncing the functions of a set of devices present in a virtual world environment with a set of devices present in a physical world environment based on parallel, user-specific data streams from said components, wherein each component provides data specific to the user and is processed concurrently to enable real-time 
synchronization across domains (Ponce Col 8 lines 26-32 “To provide accurate interactions between VR contents and the see-through reality on the display 132, the VR system 100 may substantially continuously process sensor data acquired by sensors 128 at a high frequency rate (e.g., higher than 100 Hz) configured to generate the environment reconstruction 136 of the physical world around the user accessing the user device 106”. Col 26 lines 50-56 “Similarly, while operations are depicted in the drawings and recited in the claims in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing can be advantageous”. Ponce teaches multiple tasks/components as shown above, some of which are included with Ponce based on the other secondary references. Ponce supports multitasking and parallel processing, so based on Ponce’s teaching all of the components would run concurrently, as Ponce indicates this is advantageous and yields a better outcome). Regarding claim 17, Ponce modified by Loeb and Branson and Borchetta teaches wherein syncing comprises bidirectional updates of device states and functions between virtual and physical environments (Branson Col 2 lines 16-20 “Embodiments of the invention may receive a request to create a virtual item in a virtual world, based on a real-world item. The created virtual item can then be synchronized with the real-world item. The synchronization may be unidirectional or bidirectional”). Claim(s) 9 is rejected under 35 U.S.C. 103 as being unpatentable over Ponce modified by Loeb and Branson and Borchetta as applied to claim 8 above, and further in view of Bendale et al. (US Pat. Pub. No. 20180341811 “Bendale”).
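The "bidirectional updates of device states" addressed for claim 17 above can be illustrated with a minimal hypothetical sketch (this is not code from Branson or any other cited reference; the class and device names are invented for illustration) in which a change on either side is replicated to the other:

```python
# Illustrative model of bidirectional virtual/physical device-state sync.
# All names are hypothetical; not drawn from any cited reference.

class SyncedDevice:
    def __init__(self):
        self.physical_state = {}   # state of the real-world item
        self.virtual_state = {}    # state of its virtual counterpart

    def update_physical(self, key, value):
        self.physical_state[key] = value
        self.virtual_state[key] = value    # replicate change to the virtual item

    def update_virtual(self, key, value):
        self.virtual_state[key] = value
        self.physical_state[key] = value   # replicate change back to the physical item

lamp = SyncedDevice()
lamp.update_physical("power", "on")    # real-world change mirrored into the virtual world
lamp.update_virtual("color", "red")    # virtual change mirrored back into the real world
```

A unidirectional variant, as Branson's quoted passage also contemplates, would simply omit one of the two replication directions.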
Regarding claim 9, Ponce modified by Loeb and Branson and Borchetta teaches bilateral synchronization (Branson Col 10 lines 25-27 “In another embodiment, the synchronization is bilateral synchronization, where actions to the physical item are replicated to the virtual item 170, and actions to the virtual item 170 are replicated to the physical item.”) but is silent about wherein the user mails a package of items in the virtual world environment and a package of the same items is sent in the physical world environment; Bendale teaches the user mails a package of items in the virtual world environment and a package of the same items is sent in the physical world environment (“[0086]….The system may allow users to send messages into virtual environments or out into the world via mobile connection or desktop application. Content may be shared and received in both virtual environments or mobile connection or desktop application”); Bendale and Ponce modified by Loeb and Branson and Borchetta are analogous as both of them are related to data processing. Therefore it would have been obvious for an ordinary skilled person in the art before the effective filing date of claimed invention to have modified Ponce modified by Loeb and Branson and Borchetta by having the user mail a package of items in the virtual world environment while a package of the same items is sent in the physical world environment as taught by Bendale. The motivation for the above is to enhance the applicability of Ponce by having an additional method of synchronization. Claim(s) 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over Ponce modified by Loeb and Branson and Borchetta as applied to claim 12 above, and further in view of Bouse et al. (US Pat. Pub. No. 20120109882 “Bouse”). Regarding claim 13, Ponce modified by Loeb and Branson and Borchetta is silent about wherein the set of components further comprises: a component to group virtual aggregation for third party transactions.
Bouse teaches a component to group virtual aggregation for third party transactions (“[0021] Aggregated data is analyzed to determine at least one synergy or at least one substantially common preference amongst a plurality of Users, and then enables the Provisioner to use the aggregated data to customize at least one of an interaction and a transaction between the Provisioner and User”); Bouse and Ponce modified by Loeb and Branson and Borchetta are analogous as both of them are related to data processing. Therefore it would have been obvious for an ordinary skilled person in the art before the effective filing date of claimed invention to have modified Ponce modified by Loeb and Branson and Borchetta by having a component to group virtual aggregation for third party transactions as taught by Bouse and use that functionality with Ponce’s processor. The motivation for the above is that the third party users can customize their interaction (Bouse “[0021] Aggregated data is analyzed to determine at least one synergy or at least one substantially common preference amongst a plurality of Users, and then enables the Provisioner to use the aggregated data to customize at least one of an interaction and a transaction between the Provisioner and User”). Regarding claim 14 Ponce modified by Loeb, Branson, Borchetta and Bouse teaches wherein the component to group virtual aggregation for third party transactions is configured to take in a group of user preferences to create a transaction that is an aggregate of all user preferences (Bouse “[0021] Aggregated data is analyzed to determine at least one synergy or at least one substantially common preference amongst a plurality of Users, and then enables the Provisioner to use the aggregated data to customize at least one of an interaction and a transaction between the Provisioner and User”). 
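As an illustrative sketch only (the helper below is hypothetical and does not appear in Bouse), determining a "substantially common preference amongst a plurality of Users," as in the Bouse passage quoted above, can be modeled as keeping only those preferences shared by at least a given fraction of the group:

```python
# Hypothetical sketch of aggregating user preferences into a group-level
# "substantially common preference" set. Not code from any cited reference.
from collections import Counter

def common_preferences(user_prefs, threshold=0.5):
    """Return preferences held by at least `threshold` of the users."""
    counts = Counter(p for prefs in user_prefs for p in set(prefs))
    n = len(user_prefs)
    return {p for p, c in counts.items() if c / n >= threshold}

# Example: three users' preference sets for a property-style transaction.
group = [
    {"modern_style", "garden"},
    {"modern_style", "garage"},
    {"modern_style", "garden", "pool"},
]
common_preferences(group)   # -> {"modern_style", "garden"}
```

The resulting set could then serve as the aggregate transaction tailored to the group, in the sense of claims 14 and 15.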
Regarding claim 15, Ponce modified by Loeb, Branson, Borchetta and Bouse teaches wherein the component to group virtual aggregation for third party transactions is connected to the third party transaction sync component to transmit the created transaction that is an aggregate of all user preferences (Bouse “[0021] Aggregated data is analyzed to determine at least one synergy or at least one substantially common preference amongst a plurality of Users, and then enables the Provisioner to use the aggregated data to customize at least one of an interaction and a transaction between the Provisioner and User”). Claim(s) 16 is rejected under 35 U.S.C. 103 as being unpatentable over Ponce modified by Loeb and Branson and Borchetta as applied to claim 1 above, and further in view of Soppin et al. (US Pat. Pub. No. 20210256597 “Soppin”). Regarding claim 16, Ponce modified by Loeb and Branson and Borchetta is silent about the persona embodiment of physical reality component further comprising an AI Personifier configured to mimic user emotions in the avatar using machine learning algorithms based on sensor input and user preferences. Soppin teaches mimicking user emotions in the avatar using machine learning algorithms based on sensor input and user preferences (“[0032] In an embodiment, the computing system (103), upon successful authentication of the physical user (101), may generate a unique avatar indicative of the virtual user corresponding to the physical user (101). The avatar is generated based on the one or more user details using a Convolution neural network (CNN) based Artificial Intelligence (AI) technique. The one or more user details may include at least one of user credentials, a user age, a user gender, a user preferences, a biometric data, a one-time password and a payment information.
For example, for the physical user (101) of age “60” and the user gender as “male” an avatar resembling a old person and for the physical user (101) of age “10” and the user gender as “female” an avatar resembling a kid may be generated. The avatar may be a graphic image or a humanoid resembling the physical user (101). Further, the computing system (103) may generate the virtual environment (105) comprising one or more virtual stores from a real-time video corresponding to one or more physical stores. The computing system (103) based on the physical store details, may receive the real-time video corresponding to the physical store captured by a plurality of cameras”); Soppin and Ponce modified by Loeb and Branson and Borchetta are analogous as both of them are related to data processing. Therefore it would have been obvious for an ordinary skilled person in the art before the effective filing date of claimed invention to have modified Ponce modified by Loeb and Branson and Borchetta by having the persona embodiment of physical reality component further comprise an AI Personifier configured to mimic user emotions in the avatar using machine learning algorithms based on sensor input and user preferences as taught by Soppin. The motivation for the above is to have automation for avatar generation. Claim(s) 18 is rejected under 35 U.S.C. 103 as being unpatentable over Ponce modified by Loeb and Branson and Borchetta as applied to claim 1 above, and further in view of Whalin et al. (US Pat. Pub. No. 20190273627 “Whalin”).
Regarding claim 18, Ponce modified by Loeb and Branson and Borchetta teaches receiving a set of user preferences from a plurality of users, each user preference comprising data indicative of at least one of a user's selections, interests, or requirements for a point of interest (POI) space or transaction (Ponce Col 4 lines 14-21 “In some implementations, the user can view the property in a mode that reflects a preferred style of the user. The preferred style property display can provide a visual assistance to regular users who have a strong preference for a house style, which could be implemented in an available property that is currently designed in a significantly different style, assisting regular users to evaluate property based on its full potential. Col 14 lines 63-65 “FIG. 3A is a flow chart of an example process 300 for displaying a 3-D environment from a stream of input images of a scene filtered and updated based on user preferences”) but is silent about aggregating the received user preferences for group virtual aggregation for third party transactions by: analyzing the set of user preferences using collaborative filtering, wherein collaborative filtering comprises identifying patterns or similarities among the user preferences of the plurality of users; generating a group preference profile based on the analysis, the group preference profile representing an aggregate of the user preferences and reflecting the collective interests or requirements of the group; using the group preference profile to facilitate or customize a third party transaction, such that the transaction is tailored to the aggregated preferences of the group of users; Whalin teaches aggregating the received user preferences for group virtual aggregation for third party transactions by: analyzing the set of user preferences using collaborative filtering, wherein collaborative filtering comprises identifying patterns or similarities among the user preferences of the plurality of users
(“[0195] In embodiments, the present invention may implement a computer implemented method for providing recommendations for an in-person meeting group, comprising collecting user information (e.g. user information from a member user's activity on the web-based meeting facility, user information from a non-member user as a guest to a meeting group, user information from a user derived from a social network site, and the like), where the user information provides information related to topical interests and location information for at least one of multiple users;…..the recommendation may be based on a collaborative filtering algorithm that is based on analyzing similarities between interests of a user and interests of a member of a group”); generating a group preference profile based on the analysis, the group preference profile representing an aggregate of the user preferences and reflecting the collective interests or requirements of the group (“[0197] In embodiments, analytics and statistics may be applied and viewed for a meeting group or event through third-party sites, such as through Google's analytics platform. Through these sites an organizer or promoter may be able to learn how many page views are being received, what locations visitors are from, what pages they're looking at, when they visit, and the like. An organizer may be able to see which events get the most traffic, or if emails send a lot of people to the site, see what words people search for to get to the group of event page”); using the group preference profile to facilitate or customize a third party transaction, such that the transaction is tailored to the aggregated preferences of the group of users (“[0195]….. 
The recommendation may be emailed to the user, provided to the user though the user interface of the web-based meeting facility, provided to a third-party social networking site, and the like”); Whalin and Ponce modified by Loeb and Branson and Borchetta are analogous as both of them are related to user data processing. Therefore it would have been obvious for an ordinary skilled person in the art before the effective filing date of claimed invention to have modified Ponce modified by Loeb and Branson and Borchetta by aggregating the received user preferences for group virtual aggregation for third party transactions by: analyzing the set of user preferences using collaborative filtering, wherein collaborative filtering comprises identifying patterns or similarities among the user preferences of the plurality of users; generating a group preference profile based on the analysis, the group preference profile representing an aggregate of the user preferences and reflecting the collective interests or requirements of the group; and using the group preference profile to facilitate or customize a third party transaction, such that the transaction is tailored to the aggregated preferences of the group of users, as taught by Whalin. The motivation for the above is to summarize different users’ preferences for customization. Claim(s) 19 is rejected under 35 U.S.C. 103 as being unpatentable over Ponce in view of Loeb, Branson and Borchetta and Bouse.
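The collaborative-filtering step recited in claim 18 — identifying similarities among the preferences of a plurality of users — can be sketched under two assumptions not specified by Whalin: set-valued preferences and a Jaccard similarity measure. The functions and user names below are purely illustrative:

```python
# Minimal sketch of the user-user similarity step in collaborative filtering.
# Jaccard similarity over preference sets is an assumed measure; the cited
# references do not prescribe a particular one.

def jaccard(a, b):
    """Similarity of two preference sets: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def most_similar_user(target, others):
    """Return the name of the user whose preferences best match the target's."""
    return max(others, key=lambda name: jaccard(target, others[name]))

target = {"hiking", "photography", "travel"}
others = {
    "user_a": {"hiking", "travel", "cooking"},
    "user_b": {"gaming", "music"},
}
most_similar_user(target, others)   # -> "user_a" (2 shared items of 4 total)
```

Pooling the preferences of the most similar users would then yield the claimed group preference profile used to tailor a third party transaction.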
Regarding claim 19, Ponce teaches a system comprising: a processor and addressable memory; a 3D virtual space generator component executable by the processor (Col 2 lines 13-17 “In another aspect, this document describes one or more non-transitory computer-readable storage devices coupled to one or more processors and having instructions stored thereon which, when executed by the one or more processors, cause the one or more processors to perform various operations”), and configured to: generate a 3D virtual space environment for a given point of interest (POI); a customization component executable by the processor and configured to: receive the generated 3D virtual space environment from the 3D virtual space generator component (Col 1 lines 64-Col 2 lines 3 “In an example context, a user can access a 3D visualization application to view a map of properties (e.g., houses, buildings, office areas, boats, event centers, shopping centers, transportation centers, etc.) that are for available (e.g., for sale or could be built) in a particular region of interest (e.g., within a given radius) around a point of interest (e.g., center of a city)”); determine whether a set of user preferences are available within the generated 3D virtual space environment, and customize the POI space for a user based on the user preferences (Col 4 lines 14-21 “In some implementations, the user can view the property in a mode that reflects a preferred style of the user. The preferred style property display can provide a visual assistance to regular users who have a strong preference for a house style, which could be implemented in an available property that is currently designed in a significantly different style, assisting regular users to evaluate property based on its full potential. Col 14 lines 63-65 “FIG. 3A is a flow chart of an example process 300 for displaying a 3-D environment from a stream of input images of a scene filtered and updated based on user preferences.
Col 22 lines 39-49 “An updated content of the target 3D environment is determined (314). The updated content of the target 3D environment can include a filtered target 3D environment generated by the server system. The filtering process can include object identification using depth maps and colored images, object classification to differentiate between structural features (e.g., walls, windows, entryways), fixed features (lighting fixtures, major kitchen appliances, window treatments, etc.), and removable features (e.g., pieces of furniture and décor) of the 3D environment”); However Ponce is silent about a group virtual aggregation component executable by the processor and configured to: receive the customized POI space and aggregate user preferences for group virtual aggregation for third party transactions; Bouse teaches a group virtual aggregation component executable by processor and configured to: receive customized POI space and aggregate user preferences for group virtual aggregation for third party transactions (“[0021] Aggregated data is analyzed to determine at least one synergy or at least one substantially common preference amongst a plurality of Users, and then enables the Provisioner to use the aggregated data to customize at least one of an interaction and a transaction between the Provisioner and User”); Bouse and Ponce are analogous as both of them are related to data processing. Therefore it would have been obvious for an ordinary skilled person in the art before the effective filing date of claimed invention to have modified Ponce modified by Loeb and Branson and Borchetta by having a group virtual aggregation component executable by the processor and configured to: receive the customized POI space and aggregate user preferences for group virtual aggregation for third party transactions as taught by Bouse and use that functionality with Ponce’s processor. 
The motivation for the above is that the third party users can customize their interaction (Bouse “[0021] Aggregated data is analyzed to determine at least one synergy or at least one substantially common preference amongst a plurality of Users, and then enables the Provisioner to use the aggregated data to customize at least one of an interaction and a transaction between the Provisioner and User”). Ponce modified by Bouse teaches IoT (Ponce Col 19 line 21 “For example, an online option of connecting to an IoT”) but is silent about an IoT sync component executable by the processor and configured to: receive the generated 3D virtual space environment and execute based on receiving physical or virtual input via checking for IoT input, and to provide IoT data to the 3D virtual space environment; Loeb teaches an IoT sync component executable by the processor and configured to: receive the generated 3D virtual space environment and execute based on receiving physical or virtual input via checking for IoT input, and to provide IoT data to the 3D virtual space environment (Col 34 lines 18-29 “A physical IoT controller 606 may be used to register, transform, and map physical functional models (input, output, and state) (e.g., stored in storage 616) of individual and/or group of IoT to 3D virtual behavior models in a virtual world. For example, the physical IoT controller 606 may determine and register locations of IoT devices and/or sensors such as sensors in IoT devices 626 and/or sensors in user devices 602. A physical IoT controller 606 may synchronize the physical functional and virtual behavior models (e.g., stored in stored in storage 616), control and adjust physical IoT configurations”.
Col 35 lines 5-12 “The cooperative control actions may be provided by the virtual world manager 610 to adjust and increase outputs of one or more actuators (e.g., physical world controllers 606) to support a particular zone that requires more resources or to offload overloaded devices. For example, when an overloaded heater with very high “on-duration” is detected, the heater may be viewed in the virtual world using the color red”); Loeb and Ponce, as modified by Bouse, are analogous as both are related to data processing. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ponce, as modified by Bouse, by having an IoT sync component executable by the processor and configured to: receive the generated 3D virtual space environment and execute based on receiving physical or virtual input via checking for IoT input, and to provide IoT data to the 3D virtual space environment as taught by Loeb, and to use that functionality with Ponce’s processor. The motivation for the above is to keep consistency between the real and virtual worlds. Ponce, as modified by Bouse and Loeb, teaches an augmented reality sync component executable by the processor and configured to: receive the generated 3D virtual space environment and execute based on receiving new augmented reality input via checking for augmented reality transmission data, and to provide augmented reality data to the 3D virtual space environment; (Ponce Col 8 lines 11-20 “The display 132 can be worn by the user as part of a headset such that a user may wear the display over their eyes like a pair of goggles or glasses.” Col 10 lines 27-37 “In some implementations, the new content is automatically pushed to the user device 106 and the mobile device 108 as it becomes available. In some implementations, the VR application 130 and the VR controller application 140 can be configured to request approval to display new content. 
In response to receiving, by the user device 106 and/or the mobile device 108, a user input requesting the new content, the user device 106 and the mobile device 108 can retrieve, from the server system 104, a new content item (auxiliary data 120) that is available for virtual display by the user device 106 and the mobile device 108”); a video/audio reality mapper transmission component executable by the processor and configured to: receive the generated 3D virtual space environment and execute based on receiving new video/audio transmission via checking for video/audio transmission data, and to provide video/audio data to the 3D virtual space environment; (Ponce Col 10 lines 27-37 “In some implementations, the new content is automatically pushed to the user device 106 and the mobile device 108 as it becomes available.” Col 20 lines 16-17 and lines 36-38 “Color images are frames of two-dimensional (2-D) images or videos captured by a camera…..The server system receives as input each of a plurality of color images in the stream of color images. The server system may process each color image in the stream of color images”. The new content/images are checked for their availability and sent to the user device when available); Even though Ponce as modified by Bouse and Loeb teaches a virtual and physical item sync component (integral part of processor 123 of Fig. 1) executable by the processor and configured to: receive the customized POI space and execute based on receiving new virtual and physical item data (Ponce Col 10 lines 27-37 “In some implementations, the new content is automatically pushed to the user device 106 and the mobile device 108 as it becomes available. In some implementations, the VR application 130 and the VR controller application 140 can be configured to request approval to display new content. 
In response to receiving, by the user device 106 and/or the mobile device 108, a user input requesting the new content, the user device 106 and the mobile device 108 can retrieve, from the server system 104, a new content item (auxiliary data 120) that is available for virtual display by the user device 106 and the mobile device 108”. Here, “new content” can include physical or virtual content), it is silent about being further configured to receive data from at least one of: a mail service transaction sync component, executable by the processor and configured to execute based on receiving new mail and third party data that comprise mail data based on checking for mail and third party data, and to provide mail transaction data to the virtual and physical item sync component; and a third party transaction sync component, executable by the processor and configured to execute based on receiving new mail and third party data that comprise third party data based on checking for mail and third party data, and to provide third party transaction data to the virtual and physical item sync component; Branson teaches that the virtual and physical item sync component (integral part of processor 122) is further based on receiving data from at least one of: a mail transaction sync component, wherein the mail transaction sync component is executed based on receiving new mail and third party data that comprise mail data based on checking for mail and third party data (Col 5 lines 2-17 “Furthermore, embodiments of the invention may synchronize the virtual item with the real-world eBook reader 106, such that the virtual item in the virtual world may reflect any changes to the physical eBook reader 106 existing in the real world…..Upon receiving the state change information (i.e., the page turn) from the eBook reader 106, the client system 120 may replicate this change with the virtual world system 160 using the network 140. 
The virtual world server may then update the virtual item in the virtual world to reflect the page turn”); Branson and Ponce, as modified by Bouse and Loeb, are analogous as both are related to data processing. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ponce, as modified by Bouse and Loeb, by having the virtual and physical item sync component further based on receiving data from at least one of: a mail service transaction sync component, executable by the processor and configured to execute based on receiving new mail and third party data that comprise mail data based on checking for mail and third party data, and to provide mail transaction data to the virtual and physical item sync component; and a third party transaction sync component, executable by the processor and configured to execute based on receiving new mail and third party data that comprise third party data based on checking for mail and third party data, and to provide third party transaction data to the virtual and physical item sync component, similar to having the virtual and physical item sync component further based on receiving data from at least one of: a mail transaction sync component, wherein the mail transaction sync component is executed based on receiving new mail and third party data that comprise mail data based on checking for mail and third party data, as taught by Branson, and to use that functionality with the processor of Ponce as modified by Loeb. The motivation for the above is to keep consistency between the real and virtual worlds. 
Even though Ponce as modified by Loeb and Branson teaches that the persona embodiment of physical reality component comprises a virtual avatar (Ponce Col 3 lines 2-3 “For example, a virtual agent of an interactive presentation of a property can sit on one of the chairs”), it is silent about a persona embodiment of physical reality component executable by the processor and configured to receive the generated 3D virtual space environment and execute based on whether a user setting for persona embodiment is enabled via checking for embodiment notifications, and comprising a virtual avatar component configured to: receive motion tracking sensor data from a VR headset or smartphone, receive user preference settings from the customization component, mirror real-world behaviors by replicating user movement detected by said sensors in the avatar's actions within the virtual environment, and provide avatar state data to the 3D virtual space environment; Borchetta teaches a persona embodiment of physical reality component, wherein the persona embodiment of physical reality component is executed based on whether a user setting for persona embodiment is enabled via checking for embodiment notifications (“[0083] In some embodiments, a data file (e.g., an image file and/or file storing the choices made by the user in customizing the avatar) may be stored in association with the user. For example, an identifier for uniquely identifying the avatar created for the user based on the choices made by the user may be generated and stored in association with the file and stored in a user account data record of the user. 
In some embodiments, the avatar created for the user based on the choices made by the user via menu 410 may be utilized or included in various marketing content for a campaign the user is participating in (e.g., in e-mail messages sent on behalf of the user”); comprising a virtual avatar component configured to: receive motion tracking sensor data from a VR headset or smartphone, receive user preference settings from the customization component, mirror real-world behaviors by replicating user movement detected by said sensors in the avatar's actions within the virtual environment, and provide avatar state data to the 3D virtual space environment (“[0034]…. For example, a User Device 120 may comprise a personal computer such as a desktop, laptop or tablet computer, a cellular telephone or a smartphone or other mobile device. [0041]… An input device may communicate with….. a video camera, a motion detector, a digital camera. [0078] Referring now to FIG. 4, illustrated therein is an example user interface 400 which may be output to a user for facilitating the generation of a customized avatar to represent a user. The term “avatar” as used herein unless indicated otherwise, may refer to a graphical representation, likeness or caricature of a user. [0083] In some embodiments, a data file (e.g., an image file and/or file storing the choices made by the user in customizing the avatar) may be stored in association with the user. For example, an identifier for uniquely identifying the avatar created for the user based on the choices made by the user . [0110]….. (ii) creating a first personalized graphical representation of the first user based on information received from the first user, thereby creating a first user graphical representation; [0116]….. In some embodiments, a user may be provided with software tools for modifying a default avatar's appearance by selecting options or values from a menu. 
In other embodiments, a user may be provided with software tools for uploading one or more images or graphics for use in customizing an avatar (e.g., an image of the user's face, which may be rendered into a caricature or other derivative representation)”); Borchetta and Ponce, as modified by Bouse, Loeb, and Branson, are analogous as both are related to data processing. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ponce, as modified by Bouse, Loeb, and Branson, by having a persona embodiment of physical reality component, wherein the persona embodiment of physical reality component is executed based on whether a user setting for persona embodiment is enabled via checking for embodiment notifications, and comprising a virtual avatar component configured to: receive motion tracking sensor data from a VR headset or smartphone, receive user preference settings from the customization component, mirror real-world behaviors by replicating user movement detected by said sensors in the avatar's actions within the virtual environment, and provide avatar state data to the 3D virtual space environment as taught by Borchetta, and to use that functionality with Ponce’s processor. The motivation for the above is to customize avatar creation. 
Ponce, as modified by Bouse, Loeb, Branson, and Borchetta, teaches an environment update component executable by the processor and configured to: receive the customized POI space, group virtual aggregation data, and data from the IoT sync component, augmented reality sync component, video/audio reality mapper transmission component, virtual and physical item sync component, mail service transaction sync component, third party transaction sync component, and persona embodiment of physical reality component; (After incorporating all of these teachings, Ponce has all of the components and uses Bouse’s teaching for aggregation: Bouse “[0021] Aggregated data is analyzed to determine at least one synergy or at least one substantially common preference amongst a plurality of Users, and then enables the Provisioner to use the aggregated data to customize at least one of an interaction and a transaction between the Provisioner and User”); wherein the set of components are configured to be continuously executed in real-time in a distributed, multi-modal synchronization framework, thereby syncing the functions of a set of devices present in a virtual world environment with a set of devices present in a physical world environment based on parallel, user-specific data streams from said components, wherein each component provides data specific to the user and is processed concurrently to enable real-time, bidirectional synchronization across domains (Branson Col 2 lines 16-20 “Embodiments of the invention may receive a request to create a virtual item in a virtual world, based on a real-world item. The created virtual item can then be synchronized with the real-world item. The synchronization may be unidirectional or bidirectional”. 
Ponce Col 8 lines 26-32 “To provide accurate interactions between VR contents and the see-through reality on the display 132, the VR system 100 may substantially continuously process sensor data acquired by sensors 128 at a high frequency rate (e.g., higher than 100 Hz) configured to generate the environment reconstruction 136 of the physical world around the user accessing the user device 106”. Col 26 lines 50-56 “Similarly, while operations are depicted in the drawings and recited in the claims in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing can be advantageous”. Ponce teaches multiple tasks/components as shown above. Some of the tasks/components are included with Ponce based on other secondary references. Ponce supports multitasking and parallel processing. So, based on Ponce’s teaching, all of the components will run concurrently because, according to Ponce, doing so is advantageous and yields a better outcome.)

Response to Arguments

Applicant's arguments, see remarks filed 12/10/2025, with respect to the 112(a) rejection have been fully considered but are not persuasive. The rejection has been maintained. Applicant argues, see remarks Page 11, that the following paragraph provides support for Virtual Avatar Mirroring Real-World Behaviors Based on Physical Sensor Data: "VR headsets may communicate with the system via a cable or wirelessly and include motion tracking sensors to track user movement, thus enabling a 360-degree world. VR headsets may also connect to smartphones which now provide an even more real-world experience using the smartphone's motion sensors and other built in sensors in conjunction with the VR headset." 
Examiner replies that the above paragraph does not provide support for Virtual Avatar Mirroring Real-World Behaviors Based on Physical Sensor Data; it merely indicates that the VR headset captures a real-world experience but does not indicate that the VR headset mirrors real-world behaviors based on physical sensor data. Applicant argues, see remarks page 11, that the following paragraph provides support for mirroring sensor data: "A Persona Creator may be utilized to create the Character by combining the Persona, Language Module, and Avatar. In one embodiment, ……….. Avatar Creator that generates the 3D avatar using procedural generation, and an AI Personifier which adds personality to the character by using ML algorithm for emotion mimicking." Examiner replies that the above paragraph does not provide support for mirroring of sensor data. Applicant argues, page 12, that the following provides support for mirroring sensor data: "The system may further be configured to execute a Persona Embodiment of Physical Reality if the user setting for persona embodiment is enabled and while continuously checking to see if the system has received embodiment notifications.” Examiner replies that the paragraph quoted by applicant does not provide support for mirroring of sensor data. Applicant argues, see remarks Page 12, “FIG. 17 in the drawings shows the Persona Embodiment of Physical Reality component, including the Persona Creator, Avatar Creator, and AI Personifier, which together enable the avatar to reflect real-world user actions and preferences.” Examiner replies that Fig. 17 does not provide any indication that sensor data of the VR headset is mirrored in the Persona Embodiment of Physical Reality. Applicant argues, see remarks Page 12, “Accordingly, the specification describes how motion tracking sensors in VR headsets and smartphones provide real-world movement data, which is used by the system to generate and control avatars in the virtual environment. The Persona Creator and Avatar Creator modules, as shown in FIG. 
17, are explicitly designed to combine sensor data and user preferences to create avatars whose behaviors mirror those of the user in the physical world. The AI Personifier further enables the avatar to mimic user emotions and actions, supporting the claimed limitation of "a virtual avatar configured to mirror real-world behaviors based on physical sensor data and user preference settings." Examiner replies that, based on analysis of the applicant-provided paragraphs from the Specification, the Specification does not provide support for mirroring real-world behaviors based on sensor data. Applicant argues, see remarks Page 12, that the following paragraphs provide support for syncing based on user-specific data: “"The reality mapper system components function as components of a reality mapper system that may be used in conjunction with each other and executed in parallel." ……"The system may be configured to execute continuously and in the background, a check for IoT input augmented reality transmission data video/audio transmission data embodiment notifications virtual and physical item data mail/third party data.”. Examiner replies that none of the applicant-provided paragraphs from the Specification provides support for syncing based on user-specific data streams. None of the paragraphs discusses syncing based on user-specific data streams. Applicant argues, see remarks Page 12, “FIG. 2 in the drawings depicts the flow of component execution and data transmission, showing how the system continuously checks for and processes data streams from multiple domains.” Examiner replies that Fig. 2 does not provide support for syncing based on user-specific data streams. Applicant's arguments, see remarks filed 12/10/2025, with respect to the rejection of claims 1 and 12 under 35 USC 103 have been fully considered but are not persuasive. Therefore, the rejection has been maintained. Applicant argues, see remarks Page 16, “The Ponce reference, as cited in Abstract, Figs. 1-2, Col. 
2-4, Claims 1-18, teaches visualization and modification of 3D environments in VR, with user-customizable features and auxiliary data overlays. However, there is no teaching or suggestion of: Modular, parallel sync components for IoT, AR, video/audio, mail, transaction, and persona, each operating in real time and in a distributed architecture; A virtual avatar that mirrors real-world user behaviors based on motion tracking sensor data from a VR headset or smartphone; or Bidirectional, real-time synchronization of device functions between physical and virtual environments based on user-specific data streams. The Applicant asserts that Ponce's system is focused on rendering and annotating 3D environments, not on synchronizing physical and virtual device states or mirroring user behavior in avatars. That is, Ponce lacks the claimed distributed, multi-modal, real-time synchronization framework, modular sync components, and sensor-driven avatar mirroring.” Examiner replies that Ponce Col 8 lines 26-32 and Col 26 lines 50-56 teach updating the generated 3D environment to customize the 3D virtual space for a user based on the received data from at least one of the components from the set of components, and wherein the set of components are being continuously executed in real-time in a distributed, multi-modal synchronization framework thereby syncing the functions of a set of devices present in a virtual world environment with a set of devices present in a physical world environment based on parallel, user-specific data streams from said components, wherein each component provides data specific to the user and is processed concurrently to enable real-time synchronization across domains. 
See Ponce Col 8 lines 26-32 “To provide accurate interactions between VR contents and the see-through reality on the display 132, the VR system 100 may substantially continuously process sensor data acquired by sensors 128 at a high frequency rate (e.g., higher than 100 Hz) configured to generate the environment reconstruction 136 of the physical world around the user accessing the user device 106”. Here, Ponce collects data from different sensors in the real world and generates a virtual image based on data from those different sensors. That the data from different sensors are continuously processed means that sensor data are dynamically collected and that data from different sensors are used in parallel to generate the environment reconstruction. Additionally, Ponce Col 26 lines 50-56 discloses, “Similarly, while operations are depicted in the drawings and recited in the claims in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing can be advantageous”. Ponce teaches multiple tasks/components as shown above. Some of the tasks/components are included with Ponce based on other secondary references. Ponce supports multitasking and parallel processing. So, based on Ponce’s teaching, all of the components will run concurrently because, according to Ponce, doing so is advantageous and yields a better outcome. Therefore, applicant’s argument is not persuasive. In response to applicant’s argument regarding Loeb, Branson, and Borchetta, see remarks Pages 16-17, the Examiner notes that these references are not used to reject the argued limitation, and the argument is therefore moot. 
In response to applicant’s arguments for claim 9, see remarks Page 18, examiner refers applicant to the reply given above for independent claim 1, as there is no additional argument for the dependent claim. In response to applicant’s arguments for claims 13-15, see remarks Page 19, examiner refers applicant to the reply given above for independent claim 1, as there is no additional argument for the dependent claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAPTARSHI MAZUMDER whose telephone number is (571)270-3454. The examiner can normally be reached 8 am-4 pm PST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Said Broome, can be reached at (571)272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /SAPTARSHI MAZUMDER/ Primary Examiner, Art Unit 2612
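The limitation at the center of the §103 dispute is a set of sync components "continuously executed in real-time ... based on parallel, user-specific data streams ... processed concurrently." The following is a purely illustrative sketch of that kind of architecture, with hypothetical component names and data; it is not the applicant's actual implementation nor code from any cited reference:

```python
# Illustrative sketch only (hypothetical names): several per-user sync
# components each poll their own domain-specific stream and update a shared
# virtual-environment state, all scheduled concurrently.
import asyncio


async def sync_component(name, env, updates):
    # Each component checks its own stream in parallel with the others and
    # provides user-specific data to the shared 3D environment state.
    for data in updates:
        await asyncio.sleep(0)  # yield control so components interleave
        env[name] = data        # "provide data to the 3D virtual space"


async def main():
    env = {}  # shared 3D virtual space state (stand-in)
    streams = {
        "iot_sync": ["thermostat:72F"],
        "ar_sync": ["overlay:v2"],
        "avatar": ["pose:standing"],
    }
    # All components run concurrently rather than in a fixed sequence,
    # mirroring the "multitasking and parallel processing" point in dispute.
    await asyncio.gather(*(
        sync_component(name, env, updates)
        for name, updates in streams.items()
    ))
    return env


state = asyncio.run(main())
print(state)
```

The design point in dispute is exactly this scheduling question: whether the components run as one fixed pipeline or as independent, concurrently processed streams.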

Prosecution Timeline

May 25, 2023
Application Filed
Mar 07, 2025
Non-Final Rejection — §103, §112
Jul 14, 2025
Response Filed
Sep 06, 2025
Final Rejection — §103, §112
Dec 10, 2025
Request for Continued Examination
Dec 18, 2025
Response after Non-Final Action
Jan 23, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597211
GENERATING VARIANTS OF VIRTUAL OBJECTS BASED ON ADJUSTABLE EXTERNAL FACTORS
2y 5m to grant Granted Apr 07, 2026
Patent 12586316
METHOD FOR MIRRORING 3D OBJECTS TO LIGHT FIELD DISPLAYS
2y 5m to grant Granted Mar 24, 2026
Patent 12582488
USER INTERFACE FOR CONNECTING MODEL STRUCTURES AND ASSOCIATED SYSTEMS AND METHODS
2y 5m to grant Granted Mar 24, 2026
Patent 12579745
Curvature-Guided Inter-Patch 3D Inpainting for Dynamic Mesh Coding
2y 5m to grant Granted Mar 17, 2026
Patent 12567210
Multipath Artifact Avoidance in Mobile Dimensioning
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
64%
Grant Probability
76%
With Interview (+11.8%)
2y 8m
Median Time to Grant
High
PTA Risk
Based on 375 resolved cases by this examiner. Grant probability derived from career allow rate.
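
The headline projection figures are simple arithmetic over the examiner's career data shown above (241 granted of 375 resolved, +11.8% interview lift); a minimal sketch of the assumed derivation (the rounding and the additive lift are assumptions about how this page computes its numbers):

```python
# Reproducing the dashboard's headline figures from the career data above.
granted, resolved = 241, 375

# Career allow rate: granted / resolved, displayed as a whole percentage.
allow_rate = granted / resolved               # 0.6427 -> displayed as 64%

# Interview lift is reported as an additive bump on the base probability.
interview_lift = 0.118                        # +11.8%
with_interview = allow_rate + interview_lift  # 0.7607 -> displayed as 76%

print(f"Grant probability: {allow_rate:.0%}")
print(f"With interview:    {with_interview:.0%}")
```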
