DETAILED ACTION
Status
This Office Action is responsive to the claims filed on 07/10/2024. Claims 1-2 are pending and have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder “module” that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “a static digital twin module”, “an input module”, “an integration module”, “a display module”, and “a navigation module” in claim 1.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-2 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-2 of copending Application No. 18646820, in view of Badr (US 20190391716 A1).
Instant Application (Claim 1):
1. A system for displaying an immersive interactive live digital twin of an indoor area comprising:
a static digital twin module, wherein the static digital twin module comprises a static digital twin model of the indoor area;
an input module, wherein the input module is configured to receive live feeds of the indoor area from a plurality of cameras located in the indoor area;
an integration module, wherein the integration module is configured to stitch the live feeds of the indoor area to the static digital twin model to create the immersive live digital twin;
an interactive area disposed in the digital twin model, wherein the interactive area includes an action button for controlling an interactive component, wherein the live feeds capture the interactive component being controlled by a user when the action button is selected;
a display module for displaying the immersive live digital twin to provide a user with the perception that the user is immersed with the immersive live digital twin; and
a navigation module to navigate to different parts of the immersive live digital twin.
Application No. 18646820 (Claim 1):
1. A system for displaying an immersive live digital twin of an indoor area comprising:
a static digital twin module, wherein the static digital twin module comprises a static digital twin model of the indoor area;
an input module, wherein the input module is configured to receive live feeds of the indoor area from a plurality of cameras located in the indoor area;
an integration module, wherein the integration module is configured to stitch the live feeds of the indoor area to the static digital twin model to create the immersive live digital twin;
a display module for displaying the immersive live digital twin to provide a user with the perception that the user is immersed with the immersive live digital twin; and
a navigation module to navigate to different parts of the immersive live digital twin.
Claims 1-2 of copending Application No. 18646820 do not disclose “an interactive area disposed in the digital twin model, wherein the interactive area includes an action button for controlling an interactive component, wherein the live feeds capture the interactive component being controlled by a user when the action button is selected”.
However, in the same field of endeavor, Badr discloses an interactive area disposed in the digital twin model, wherein the interactive area includes an action button for controlling an interactive component ([0023] “Systems, methods, and computer program products are described for using a virtual assistant application to identify a smart device (or physical controls for a smart device) based on image data (e.g., a single frame image, continuous video, a stream of images, etc.) and, for each identified smart device, presenting one or more user interface controls for controlling the smart device.”), wherein the live feeds capture the interactive component being controlled by a user when the action button is selected (see Fig. 3, [0023] “For example, a smart device or physical controls for a smart device may be identified (e.g. recognized using object recognition techniques) in image data that represents the viewfinder of the mobile device's camera. In response, user interface controls for controlling the smart device can be presented, e.g., within the viewfinder using augmented reality techniques, such that the user can control the smart device.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified claims 1-2 of copending Application No. 18646820 with the features of disposing, in the digital twin model, an interactive area including an action button for controlling an interactive component, wherein the live feeds capture the interactive component being controlled by a user when the action button is selected. Doing so would allow real-time control of interactive components.
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2 are rejected under 35 U.S.C. 103 as being unpatentable over Madonna (US 20210074062 A1), in view of Muir (US 20230236493 A1) and Badr (US 20190391716 A1).
Regarding Claim 1, Madonna discloses a system for displaying an immersive live digital twin of an indoor area comprising:
a static digital twin module, wherein the static digital twin module comprises a static digital twin model of the indoor area ([0028] “As used herein, the term “virtual room” refers to a digital twin of a physical room that is represented by a depiction of an interior portion of a physical structure or an exterior space associated with a physical structure.”);
an input module, wherein the input module is configured to receive feeds of the indoor area from a plurality of cameras located in the indoor area ([0060] “At step 410, an installer places a 3-D camera at a plurality of positions in the physical room, and captures a plurality of overlapping sets of 2-D images (e.g., 2-D panoramic images) and a 3-D space model (e.g., 3-D mesh).”);
an integration module, wherein the integration module is configured to stitch the feeds of the indoor area to the static digital twin model to create the immersive live digital twin ([0061] “At step 420, the 2-D images (e.g., 2-D panoramic images) and 3-D space model (e.g., 3-D mesh) is imported from the 3-D camera to a stitching application, which may be executed in the cloud or on a local computing device. In one implementation, the stitching application may be the Matterport® cloud-based software package. At step 430, the installer utilizes the stitching application to stitch the 2-D images (e.g., 2-D panoramic images) and 3-D space model (e.g., 3-D mesh) together, to link (i.e. stitch) image data to corresponding locations in the 3-D space model.”);
an interactive area disposed in the digital twin model, wherein the interactive area includes an action button for controlling an interactive component ([0011] “By interacting with (e.g., touching, clicking on, etc.) substantially photo-realistic depictions of the devices within the user-navigable 3-D virtual room, a user may indicate changes to the state of corresponding devices in the physical room. As the state of devices in the physical room is changed, a 3-D graphics engine may dynamically update the appearance of the user-navigable 3-D virtual room to reflect the changes, such that what a user views within the virtual room will mimic their experience within the corresponding physical room.”);
a display module for displaying the immersive live digital twin to provide a user with the perception that the user is immersed with the immersive live digital twin ([0063] “At step 480, the artifact-corrected, tagged, appearance assigned, stitched 2-D images and 3-D space models (now referred to as a virtual room) is exported to the control app for inclusion in a user-navigable 3-D virtual room-based user interface.” [0064] “The virtual room is rendered by the graphics engine of the control app.”); and
a navigation module to navigate to different parts of the immersive live digital twin ([0064] “At step 485, the control app determines whether a virtual camera indicating the user's desired perspective is at a position that corresponds with the position from which one of the of 2-D images (e.g., 2-D panoramic images) was captured. If so, at step 485, the graphics engine of the control app renders the virtual room by using data from the 2-D image (e.g., 2-D panoramic image) captured from that location. If not, at step 495 the graphics engine of the control app blends (e.g., changes alpha channel and render layers) of available 2-D images (e.g., 2-D panoramic images) according to the 3-D space model (e.g., 3-D mesh), and uses the blended data to render the virtual room.”).
Madonna does not expressly disclose the camera feeds are live feeds, and wherein the live feeds capture the interactive component being controlled by a user when the action button is selected.
However, in the same field of endeavor, Muir discloses the input module is configured to receive live feeds of the indoor area from a plurality of cameras ([0103] “FIG. 10 then depicts an example electronics system diagram for a multi-camera capture device 300 of the type of FIG. 1, where the camera channels 120 are arranged in a dodecahedral geometry and directly image to the respective sensors.”) and the integration module is configured to stitch the live feeds of the indoor area ([0103] “Image data can be collected from each of the 11 cameras 320, and directed through an interface input-output module, through a cable or bundle of cables, to a portable computer that can provide image processing, including live image cropping and stitching or tiling, as well as camera and device control.”).
Badr discloses an interactive area disposed in the digital twin model, wherein the interactive area includes an action button for controlling an interactive component ([0023] “Systems, methods, and computer program products are described for using a virtual assistant application to identify a smart device (or physical controls for a smart device) based on image data (e.g., a single frame image, continuous video, a stream of images, etc.) and, for each identified smart device, presenting one or more user interface controls for controlling the smart device.”), wherein the live feeds capture the interactive component being controlled by a user when the action button is selected (see Fig. 3, [0023] “For example, a smart device or physical controls for a smart device may be identified (e.g. recognized using object recognition techniques) in image data that represents the viewfinder of the mobile device's camera. In response, user interface controls for controlling the smart device can be presented, e.g., within the viewfinder using augmented reality techniques, such that the user can control the smart device.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the system of Madonna with the features of receiving and stitching the live feeds from the cameras, wherein the live feeds capture the interactive component being controlled by a user when the action button is selected. Doing so would allow real-time processing of data and real-time control of interactive components.
Regarding Claim 2, it recites limitations similar to those of claim 1, but in method form. The rationale applied in rejecting claim 1 applies equally to claim 2.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHONG WU whose telephone number is (571)270-5207. The examiner can normally be reached MON-FRI: 9AM-5PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu, can be reached at 571-272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHONG WU/Primary Examiner, Art Unit 2613