DETAILED ACTION
This Office Action is in response to Applicant's filing received on November 25, 2024. Claims 1-20 are currently pending in the instant application. The application is a continuation of U.S. Application No. 17/370,272, filed on July 8, 2021, now U.S. Patent No. 12,190,272.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The Examiner acknowledges the Applicant's filing of IDS references on November 25, 2024. The references have been considered at this time. A copy of the annotated IDS sheet is included in this correspondence.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12,190,272. Although the claims at issue are not identical, they are not patentably distinct from each other because they are claiming the same invention.
Claim 1 can be drawn to claim 1 of U.S. Patent 12,190,272; specifically: A device, comprising: a processing system including a processor; and a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations, the operations comprising: obtaining a first group of image content of an equipment site; aggregating the first group of image content resulting in first aggregated image content; generating first virtual reality image content based on the first aggregated image content; generating a communication session among a group of communication devices; providing, over a communication network, the first virtual reality image content to the group of communication devices during the communication session, wherein each of the group of communication devices renders the first virtual reality image content on a respective graphical user interface of each of the group of communication devices, wherein the graphical user interface receives user-generated input associated with each user associated with each of the group of communication devices to interact with the first virtual reality image content; receiving, over the communication network, first user-generated input from a first communication device of the group of communication devices; identifying first equipment to install at the equipment site according to the first user-generated input; adjusting the first virtual reality image content by incorporating an image of the first equipment resulting in a second virtual reality image content; receiving, over the communication network, second user-generated input from a second communication device of the group of communication devices; identifying second equipment to install at the equipment site according to the second user-generated input; adjusting the second virtual reality image content by incorporating an image of the second equipment resulting in a third virtual reality image content; determining that
installation of the second equipment does not satisfy an installation threshold resulting in a first determination in response to analyzing a fourth virtual reality image content associated with the installation of the second equipment utilizing a machine learning application; providing, over the communication network to each of the group of communication devices, a first notification indicating the installation of the second equipment does not satisfy the installation threshold based on the first determination; providing, over the communication network to a particular communication device associated with an equipment installer, instructions for adjusting the installation of the second equipment, wherein the instructions for the adjusting are obtained responsive to the providing the first notification; based on identifying an adjusted installation of the second equipment, determining that the adjusted installation satisfies the installation threshold resulting in a second determination, wherein the second determination is in response to analyzing, utilizing the machine learning application, a fifth virtual reality image content associated with the adjusted installation of the second equipment; and providing, over the communication network to each of the group of communication devices, a second notification indicating the adjusted installation of the second equipment satisfies the installation threshold based on the second determination, thereby facilitating collaborative design of the equipment site between the equipment installer and a plurality of users remote from the equipment site.
Claim 2 can be drawn to claim 2 of U.S. Patent 12,190,272.
Claim 3 can be drawn to claim 3 of U.S. Patent 12,190,272.
Claim 4 can be drawn to claim 4 of U.S. Patent 12,190,272.
Claim 5 can be drawn to claim 5 of U.S. Patent 12,190,272.
Claim 6 can be drawn to claim 6 of U.S. Patent 12,190,272.
Claim 7 can be drawn to claim 7 of U.S. Patent 12,190,272.
Claim 8 can be drawn to claim 8 of U.S. Patent 12,190,272.
Claim 9 can be drawn to claim 9 of U.S. Patent 12,190,272.
Claim 10 can be drawn to claim 10 of U.S. Patent 12,190,272; A non-transitory, machine-readable medium, comprising executable instructions that, when executed by a processing system including a processor, facilitate performance of operations, the operations comprising: obtaining a first group of image content of an equipment site; aggregating the first group of image content resulting in first aggregated image content; generating first virtual reality image content based on the first aggregated image content; generating a communication session among a group of communication devices; providing, over a communication network, the first virtual reality image content to the group of communication devices during the communication session, wherein each of the group of communication devices renders the first virtual reality image content on a respective graphical user interface of each of the group of communication devices, wherein the graphical user interface receives user-generated input associated with each user associated with each of the group of communication devices to interact with the first virtual reality image content; receiving, over the communication network, first user-generated input from a first communication device of the group of communication devices; identifying first equipment to install at the equipment site according to the first user-generated input; determining a first location associated with the first equipment within the equipment site according to the first user-generated input; adjusting the first virtual reality image content by incorporating an image of the first equipment according to the first location resulting in a second virtual reality image content; receiving, over the communication network, second user-generated input from a second communication device of the group of communication devices; identifying second equipment to install at the equipment site according to the second user-generated input; determining a second location 
associated with the second equipment within the equipment site according to the second user-generated input; adjusting the second virtual reality image content by incorporating an image of the second equipment according to the second location resulting in a third virtual reality image content; determining that installation of the second equipment at the second location does not satisfy an installation threshold resulting in a first determination in response to analyzing a fourth virtual reality image content associated with the installation of the second equipment utilizing a machine learning application; providing, over the communication network to each of the group of communication devices, a first notification indicating the installation of the second equipment does not satisfy the installation threshold based on the first determination; providing, over the communication network to a particular communication device associated with an equipment installer, instructions for adjusting the installation of the second equipment, wherein the instructions for the adjusting are obtained responsive to the providing the first notification; based on identifying an adjusted installation of the second equipment, determining that the adjusted installation satisfies the installation threshold resulting in a second determination, wherein the second determination is in response to analyzing, utilizing the machine learning application, a fifth virtual reality image content associated with the adjusted installation of the second equipment; and providing, over the communication network to each of the group of communication devices, a second notification indicating the adjusted installation of the second equipment satisfies the installation threshold based on the second determination, thereby facilitating collaborative design of the equipment site between the equipment installer and a plurality of users remote from the equipment site.
Claim 11 can be drawn to claim 2 of U.S. Patent 12,190,272.
Claim 12 can be drawn to claim 3 of U.S. Patent 12,190,272.
Claim 13 can be drawn to claim 4 of U.S. Patent 12,190,272.
Claim 14 can be drawn to claim 5 of U.S. Patent 12,190,272.
Claim 15 can be drawn to claim 6 of U.S. Patent 12,190,272.
Claim 16 can be drawn to claim 7 of U.S. Patent 12,190,272.
Claim 17 can be drawn to claim 8 of U.S. Patent 12,190,272.
Claim 18 can be drawn to claim 9 of U.S. Patent 12,190,272.
Claim 19 can be drawn to claims 20 and 1 of U.S. Patent 12,190,272; A method, comprising: obtaining, by a processing system including a processor, a first group of image content of an equipment site; aggregating, by the processing system, the first group of image content resulting in first aggregated image content; generating, by the processing system, first virtual reality image content based on the first aggregated image content; generating, by the processing system, a communication session among a group of communication devices; providing, by the processing system, over a communication network, the first virtual reality image content to the group of communication devices during the communication session, wherein each of the group of communication devices renders the first virtual reality image content on a respective graphical user interface of each of the group of communication devices, wherein the graphical user interface receives user-generated input associated with each user associated with each of the group of communication devices to interact with the first virtual reality image content; receiving, by the processing system, over the communication network, a group of user-generated input from a portion of the group of communication devices; identifying, by the processing system, a group of equipment to install at the equipment site according to the group of user-generated input; adjusting, by the processing system, the first virtual reality image content by incorporating images of the group of equipment resulting in a second virtual reality image content; providing, by the processing system, over the communication network, the second virtual reality image content to the group of communication devices, wherein each of the group of communication devices renders the second virtual reality image content on the respective graphical user interface of each of the group of communication devices; obtaining, by the processing system, over the communication network, a 
second group of image content of the equipment site from a third communication device associated with an installer of equipment associated with the equipment site; aggregating, by the processing system, the second group of image content resulting in a third virtual reality image content, wherein the third virtual reality image content shows a completed installation of the equipment at the equipment site; determining, by the processing system, the completed installation of the equipment does not satisfy an installation threshold resulting in a determination in response to analyzing, by the processing system, the third virtual reality image content utilizing a machine learning application; providing, by the processing system, over the communication network to each of the group of communication devices, a notification indicating that the completed installation of the equipment does not satisfy the installation threshold based on the determination; providing, by the processing system, over the communication network to the third communication device associated with the installer, instructions for adjusting the completed installation of the equipment, wherein the instructions for the adjusting are obtained responsive to the providing the notification; based on identifying an adjusted installation of the equipment, determining, by the processing system, that the adjusted installation satisfies the installation threshold resulting in a second determination, wherein the second determination is in response to analyzing, utilizing the machine learning application, a fourth virtual reality image content associated with the adjusted installation of the equipment; and providing, by the processing system, over the communication network to each of the group of communication devices, a second notification indicating the adjusted installation of the equipment satisfies the installation threshold based on the second determination, thereby facilitating collaborative design of the 
equipment site between the installer and a plurality of users remote from the equipment site.
Further, with respect to claim 1 of U.S. Patent 12,190,272: adjusting the second virtual reality image content by incorporating an image of the second equipment resulting in a third virtual reality image content; determining that installation of the second equipment does not satisfy an installation threshold resulting in a first determination in response to analyzing a fourth virtual reality image content associated with the installation of the second equipment utilizing a machine learning application; providing, over the communication network to each of the group of communication devices, a first notification indicating the installation of the second equipment does not satisfy the installation threshold based on the first determination.
Claim 20 can be drawn to claim 9 of U.S. Patent 12,190,272.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The cited prior art generally refers to virtual or augmented reality for display of or suggestions related to rooms, appliances, areas, and other items including associated methods and systems.
U.S. Publication 2014/0282220 A1 - Embodiments are disclosed herein that relate to displaying information from search results and other sets of information as augmented reality images. For example, one disclosed embodiment provides a method of presenting information via a computing device comprising a camera and a display. The method includes displaying a representation of each of one or more items of information of a set of electronically accessible items of information. The method further comprises receiving a user input requesting display of a selected item of the set of electronically accessible items of information, obtaining an image of a physical scene, and displaying the image of the physical scene and the selected item together on the display as an augmented reality image.
U.S. Publication 2016/0026242 A1 - A head mounted display (HMD) device operating in a real world physical environment is configured with a sensor package that enables determination of an intersection of a device user's projected gaze with a location in a virtual reality environment so that virtual objects can be placed into the environment with high precision. Surface reconstruction of the physical environment can be applied using data from the sensor package to determine the user's view position in the virtual world. A gaze ray originating from the view position is projected outward and a cursor or similar indicator is rendered on the HMD display at the ray's closest intersection with the virtual world such as a virtual object, floor/ground, etc. In response to user input, such as a gesture, voice interaction, or control manipulation, a virtual object is placed at the point of intersection between the projected gaze ray and the virtual reality environment.
U.S. Publication 2020/0005538 A1 - Apparatus and associated methods relate to immersive collaboration based on configuring a real scene VRE operable from a real scene and a remote VRE operable remote from the real scene with an MR scene model of the real scene, creating an MR scene in each of the real scene VRE and remote VRE based on augmenting the MR scene model with an object model, calibrating the remote MR scene to correspond in three-dimensional space with the real scene MR scene model, and automatically providing immersive collaboration based on the MR scene in the remote VRE and updating the real scene VRE with changes to the remote VRE. In an illustrative example, the MR scene model of the real scene may be determined as a function of sensor data scanned from the real scene. In some embodiments, the MR scene model may be augmented with an object model identified from the real scene. The object model identified from the real scene may be, for example, selected from a known object set based on matching sensor data scanned from the real scene with an object from a known object set. In some embodiments, the remote MR scene may be calibrated based on applying a three-dimensional transform calculated as a function of the real MR scene and remote MR scene geometries. Some designs may recreate a subset of the real scene in the remote VRE and update the real scene VRE with changes to the remote VRE. Various embodiments may advantageously provide seamless multimedia collaboration based on updates to the remote VRE in response to physical changes to the real scene, and updating the real scene VRE in response to changes in the remote VRE.
U.S. Patent 11,593,538 B2 - A device receives an infrastructure design document that represents a network infrastructure design. The device causes the infrastructure design document to be displayed via a first interface of a geographic information system (GIS) tool that is to be used during an inspection of a site, where the inspection includes inspecting structural components that are to support equipment of a network. The device receives, from a user device, feedback data that is based on the inspection. The device causes the feedback data to be integrated into the GIS tool. The device receives, from another user device, instructions that are to be used to update the infrastructure design document. The device updates the infrastructure design document based on the set of instructions and performs actions that allow the infrastructure design document to be used when implementing the network infrastructure design.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DYLAN C WHITE whose telephone number is (571)272-1406. The examiner can normally be reached M-F 7:30-4:00 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Beth Boswell, can be reached at (571)272-6737. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DYLAN C WHITE/
Primary Examiner, Art Unit 3625
March 5, 2026