DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, or 365(c) is acknowledged. In addition, acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. 19/016,554, filed on January 10, 2025.
Oath/Declaration
The Oath/Declaration filed on January 10, 2025, is noted by the Examiner.
Claim Objections
Claim 1 is objected to because of the following informalities:
In particular, the term “an image-to-image transfer learning algorithm” recited in the fifteenth line of the claim renders the claim indefinite, because the meaning of this coined term is not apparent in light of the specification. See MPEP § 2173.05(a). The Examiner recommends that applicant amend claim 1, without adding new matter, to recite in definite terms what “an image-to-image transfer learning algorithm” actually is. Accordingly, any claim(s) dependent on claim 1 are objected to for the same reasons.
Claim 10 is objected to because of the following informalities:
In particular, the limitation “an image-to-image transfer learning algorithm” recited in the fifth and sixth lines of claim 10 is indefinite, because it is unclear whether the limitation refers to the same image-to-image transfer learning algorithm recited in the fifteenth line of claim 1 or to a different image-to-image transfer learning algorithm.
Claim 18 is objected to because of the following informalities:
In particular, the term “an image-to-image transfer learning algorithm” recited in the thirteenth line of the claim renders the claim indefinite, because the meaning of this coined term is not apparent in light of the specification. See MPEP § 2173.05(a). The Examiner recommends that applicant amend claim 18, without adding new matter, to recite in definite terms what “an image-to-image transfer learning algorithm” actually is.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 1, 9, 10, 11, 12, 13, and 14 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 11, 9, 10, 12, and 13 of U.S. Patent No. 12,229,333. Although the claims at issue are not identical, they are not patentably distinct from each other because the scope of the independent claim, mentioned above, is substantially the same.
The following is an example comparing claim 1 of the instant application with claim 11 of U.S. Patent No. 12,229,333:
Instant Application, Claim 1:
A computer-implemented method for visualizing interactions in an extended reality (XR) scene, the computer-implemented method comprising: receiving a first dataset, the first dataset representing the XR scene including at least a technical device; displaying the XR scene on an XR headset or a head-mounted display (HMD); providing a room for a user, wherein the user wears the XR headset or HMD to interact with the XR scene, wherein the XR scene is displayed on the XR headset or HMD, wherein the room includes a set of optical sensors, and wherein the set of optical sensors includes at least one optical sensor at a fixed location relative to the room; detecting, via the set of optical sensors, optical sensor data of the user as a second dataset while the user interacts with the XR scene in the room; pre-processing the first dataset to obtain a pre-processed first dataset, wherein the pre-processing includes applying an image-to-image transfer learning algorithm to the first dataset; and fusing the pre-processed first dataset and the second dataset to generate a third dataset.
U.S. Patent No. 12,229,333, Claim 8:
A computer-implemented method for visualizing interactions in an extended reality (XR) scene, the computer-implemented method comprising: receiving a first dataset, the first dataset representing the XR scene including at least a technical device; displaying the XR scene on an XR headset or a head-mounted display (HMD); providing a room for a user, wherein the user wears the XR headset or HMD to interact with the XR scene, wherein the XR scene is displayed on the XR headset or HMD, wherein the room includes a set of optical sensors, and wherein the set of optical sensors includes at least one optical sensor at a fixed location relative to the room; detecting, via the set of optical sensors, optical sensor data of the user as a second dataset while the user is interacting in the room with the XR scene, wherein the XR scene is displayed on the XR headset or HMD; and fusing the first dataset and the second dataset to generate a third dataset, wherein a trained neural network is used to provide output data based on input data, wherein the input data includes the third dataset and the output data represents a semantic context of the optical sensor data of the user.
U.S. Patent No. 12,229,333, Claim 11:
The computer-implemented method according to claim 8, further comprising: pre-processing the first dataset before fusion with the second dataset, wherein the pre-processing includes application of an image-to-image transfer learning algorithm to the first dataset, wherein the generating of the third dataset includes fusing the pre-processed first dataset and the second dataset.
Independent claim 1 of the instant application teaches “A computer-implemented method for visualizing interactions in an extended reality (XR) scene, the computer-implemented method comprising: receiving a first dataset, the first dataset representing the XR scene including at least a technical device; displaying the XR scene on an XR headset or a head-mounted display (HMD); providing a room for a user, wherein the user wears the XR headset or HMD to interact with the XR scene, wherein the XR scene is displayed on the XR headset or HMD, wherein the room includes a set of optical sensors, and wherein the set of optical sensors includes at least one optical sensor at a fixed location relative to the room; detecting, via the set of optical sensors, optical sensor data of the user as a second dataset while the user interacts with the XR scene in the room; pre-processing the first dataset to obtain a pre-processed first dataset, wherein the pre-processing includes applying an image-to-image transfer learning algorithm to the first dataset; and fusing the pre-processed first dataset and the second dataset to generate a third dataset”. However, it would have been obvious to one of ordinary skill in the art to remove the further limitation “fusing the first dataset and the second dataset to generate a third dataset, wherein a trained neural network is used to provide output data based on input data, wherein the input data includes the third dataset and the output data represents a semantic context of the optical sensor data of the user” at least since omitting the further limitation does not prevent the method from functioning properly, and the claim is in “comprising” format, indicating that other elements could be added. Likewise, dependent claims 9, 10, 11, 12, 13, and 14 are rejected based at least on the same reasoning above.
Claims 15, 16, and 17 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 14 and 15 of U.S. Patent No. 12,229,333. Although the claims at issue are not identical, they are not patentably distinct from each other because the scope of the independent claim, mentioned above, is substantially the same.
The following is an example comparing claim 15 of the instant application with claim 14 of U.S. Patent No. 12,229,333:
Instant Application, Claim 15:
A computer-implemented method for visualizing interactions in an extended reality (XR) scene, the computer-implemented method comprising: receiving a first dataset, the first dataset representing the XR scene including at least a technical device; displaying the XR scene on an XR headset or a head-mounted display (HMD); providing a room for a user, wherein the user wears the XR headset or HMD to interact with the XR scene, wherein the XR scene is displayed on the XR headset or HMD, wherein the room includes a set of optical sensors, and wherein the set of optical sensors includes at least one optical sensor at a fixed location relative to the room; detecting, via the set of optical sensors, optical sensor data of the user as a second dataset while the user interacts with the XR scene in the room; and fusing the first dataset and the second dataset to generate a third dataset, wherein the fusing includes applying a calibration algorithm, which utilizes at least one registration object deployed in the room, the calibration algorithm uses a set of registration objects, which are provided as real, physical objects in the room and which are provided as displayed virtual objects in the XR scene, and for registration purposes, the real, physical objects are moved to match the displayed virtual objects in the XR scene.
U.S. Patent No. 12,229,333, Claim 8:
A computer-implemented method for visualizing interactions in an extended reality (XR) scene, the computer-implemented method comprising: receiving a first dataset, the first dataset representing the XR scene including at least a technical device; displaying the XR scene on an XR headset or a head-mounted display (HMD); providing a room for a user, wherein the user wears the XR headset or HMD to interact with the XR scene, wherein the XR scene is displayed on the XR headset or HMD, wherein the room includes a set of optical sensors, and wherein the set of optical sensors includes at least one optical sensor at a fixed location relative to the room; detecting, via the set of optical sensors, optical sensor data of the user as a second dataset while the user is interacting in the room with the XR scene, wherein the XR scene is displayed on the XR headset or HMD; and fusing the first dataset and the second dataset to generate a third dataset, wherein a trained neural network is used to provide output data based on input data, wherein the input data includes the third dataset and the output data represents a semantic context of the optical sensor data of the user.
U.S. Patent No. 12,229,333, Claim 11:
The computer-implemented method according to claim 8, further comprising: pre-processing the first dataset before fusion with the second dataset, wherein the pre-processing includes application of an image-to-image transfer learning algorithm to the first dataset, wherein the generating of the third dataset includes fusing the pre-processed first dataset and the second dataset.
U.S. Patent No. 12,229,333, Claim 13:
The computer-implemented method according to claim 11, wherein the fusing includes applying a calibration algorithm, which utilizes at least one registration object deployed in the room.
U.S. Patent No. 12,229,333, Claim 14:
The computer-implemented method according to claim 13, wherein the calibration algorithm uses a set of registration objects, which are provided as real, physical objects in the room and which are provided as displayed virtual objects in the XR scene, wherein for registration purposes, the real, physical objects are moved to match the displayed virtual objects in the XR scene.
Independent claim 15 of the instant application teaches “a computer-implemented method for visualizing interactions in an extended reality (XR) scene, the computer-implemented method comprising: receiving a first dataset, the first dataset representing the XR scene including at least a technical device; displaying the XR scene on an XR headset or a head-mounted display (HMD); providing a room for a user, wherein the user wears the XR headset or HMD to interact with the XR scene, wherein the XR scene is displayed on the XR headset or HMD, wherein the room includes a set of optical sensors, and wherein the set of optical sensors includes at least one optical sensor at a fixed location relative to the room; detecting, via the set of optical sensors, optical sensor data of the user as a second dataset while the user interacts with the XR scene in the room; and fusing the first dataset and the second dataset to generate a third dataset, wherein the fusing includes applying a calibration algorithm, which utilizes at least one registration object deployed in the room, the calibration algorithm uses a set of registration objects, which are provided as real, physical objects in the room and which are provided as displayed virtual objects in the XR scene, and for registration purposes, the real, physical objects are moved to match the displayed virtual objects in the XR scene”. However, it would have been obvious to one of ordinary skill in the art to remove the further limitations “wherein a trained neural network is used to provide output data based on input data, wherein the input data includes the third dataset and the output data represents a semantic context of the optical sensor data of the user; the computer-implemented method according to claim 8, further comprising: pre-processing the first dataset before fusion with the second dataset, wherein the pre-processing includes application of an image-to-image transfer learning algorithm to the first dataset, wherein the generating of the third dataset includes fusing the pre-processed first dataset and the second dataset” at least since omitting the further limitations does not prevent the method from functioning properly, and the claim is in “comprising” format, indicating that other elements could be added. Likewise, dependent claims 16 and 17 are rejected based at least on the same reasoning above.
Potentially Allowable Subject Matter
Claim 19 is allowable, because the prior art references of record do not teach the combination of all elements as presently claimed. For example, with regard to claim 19, the prior art of record at least does not expressly teach the concept of fusing the first dataset and the second dataset to generate the third dataset, wherein the fusing includes applying a calibration algorithm, which utilizes at least one registration object deployed in the room, the calibration algorithm uses a set of registration objects, which are provided as real, physical objects in the room and which are provided as displayed virtual objects in the XR scene, and for registration purposes, the real, physical objects are moved to match the displayed virtual objects in the XR scene. In addition, claims 1 and 15 would be allowable if rewritten to overcome the applicable double patenting rejection(s) and objection(s) indicated above, if any, because for claims 1 and 15 the prior art references of record do not teach the combination of all element limitations as presently claimed. Further, claim 18 would be allowable if rewritten to overcome the applicable objection(s) indicated above, because for claim 18 the prior art references of record do not teach the combination of all element limitations as presently claimed. In addition, claims 2-14 and 16-17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten to overcome the applicable double patenting rejection(s) and objection(s) indicated above, if any, because for each of claims 2-14 and 16-17, at least in light of their dependency on their respective independent claim, the prior art references of record do not teach the combination of all element limitations as presently claimed.
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure and includes the following:
Casas, U.S. Patent Application Publication 2016/0191887 A1 (hereinafter Casas) teaches an apparatus for displaying a stereoscopic augmented view of a patient from a static or dynamic viewpoint of the surgeon.
Chizeck et al., U.S. Patent Application Publication 2018/0232052 A1 (hereinafter Chizeck) teaches an apparatus for generating virtual environment displays based on a group of sensors.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABDUL-SAMAD A ADEDIRAN whose telephone number is (571)272-3128. The examiner can normally be reached Monday through Thursday, 8:00 am to 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amr Awad, can be reached at 571-272-7764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ABDUL-SAMAD A ADEDIRAN/Primary Examiner, Art Unit 2621