DETAILED ACTION
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 5, 12, and 13 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claims 5 and 12, the term "possibly" renders the claims indefinite because it is unclear whether the limitations following the term are part of the claimed invention. See MPEP § 2173.05(d).
Regarding claim 13, the phrase "Use of an arrangement" renders the claim indefinite because the claim recites a use without setting forth any active, positive steps delimiting how the use is actually practiced. See MPEP § 2173.05(q).
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 3, 5, 7-11, 13, and 15 are rejected under 35 U.S.C. 102(a)(1) or 102(a)(2) as being anticipated by Kritzler (US 20190097896 A1).
As per claim 1, Kritzler teaches the claimed:
1. An arrangement, comprising
a system, defined by a plurality of electronic devices being operationally interconnected and determining a configuration of the system,
(Kritzler [0009]: “According to an embodiment the real-world object may be a computer rack where a first configuration of the computer rack is a placement of a plurality of computer components in the computer rack in a first specific order.”
Kritzler teaches that the real-world object can be a computer rack containing a plurality of computer components, which corresponds to the claimed system defined by a plurality of operationally interconnected electronic devices. Kritzler also teaches a specific order in which these computer components must be placed, which corresponds to the configuration of the system.)
a marker acting as a unique digital system identifier for identifying the system, and
(Kritzler [0041]: “In an embodiment, markers may be used on one or more of the components. This would allow the AR application to identify each component in an object. The use of markers may provide the advantage of correctly identifying components.”)
a mobile device comprising
(Kritzler [0017]: “AR applications run on mobile devices.”)
a camera, for scanning a marker associated with the system, said camera having a camera view,
(Kritzler [0017]: “Available mobile devices generally include at least one camera and a display. … Tracking requires the AR application to recognize the real-world object, which is to be augmented, in the camera view.”
Kritzler [0041]: “In an embodiment, markers may be used on one or more of the components. This would allow the AR application to identify each component in an object.”)
a display, and
(Kritzler [0017]: “Available mobile devices generally include at least one camera and a display.”)
a processor, connectable to a memory storing characteristics of the system and able to retrieve characteristics of the system from the memory, and connectable to the plurality of electronic devices,
(Kritzler [0004]: “A system for enabling an augmented reality (AR) application includes a computer processor”
Kritzler [0017]: “To recognize real-world objects, the AR application must have some prior knowledge about the real-world object. Pre-processing involves the recording of point clouds or manipulation of 3D computer aided design (CAD) files associated with a given real-world object. Vision algorithms may take the processed information relating to the real-world object and be applied to a sensed environment or state representation to identify a real-world object contained in the image/scene.”
Kritzler [0022]: “As discussed above, an AR application that was pre-processed or trained to recognize computer rack 101 in configuration (a), would not be able to recognize the computer rack 101 when it is in configuration (b), due to the elimination, addition and rearrangement of the objects contained in the computer rack 101.”
Kritzler teaches that the AR application recognizes the characteristics of one specific computer rack and not another, indicating that it is able to retrieve characteristics of the system from a memory.
Kritzler [0031]: “… portions of the virtual overlay may act as buttons which initiate actions within the AR or MR application. These buttons can give access to digital information connected to the object.”
Kritzler teaches that the AR application, which displays the virtual overlay of the computer system and its components, can provide digital information specific to the components connected to it. This indicates that the AR application, which includes a computer processor, is able to connect to the plurality of computer components.)
wherein the mobile device is configured for displaying an augmented and/or mixed reality (AR/MR) interface corresponding to one or more of the plurality of electronic devices of the system, along with information of the system and/or one or more of the plurality of electronic devices, based on said stored and retrieved characteristics and/or obtained from the connectable plurality of electronic devices.
(Kritzler [0008]: “In the AR application, associating the user input with the real-world object and superimposing in a viewable mobile device, augmented reality data over a depiction of the real-world object.”
Kritzler [0021]: “The mobile device 105 includes a camera or other imaging device, which may be directed toward the computer rack 101 to sense a digital image or representation of the computer rack 101.”
Kritzler [0025]: “Mobile device 203 provides user 201 with a representative view 207 of the 3D space 207. The representative view 207 includes a depiction 209 of real-world object 205.”
Kritzler [0031]: “… portions of the virtual overlay may act as buttons which initiate actions within the AR or MR application. These buttons can give access to digital information connected to the object.”
Kritzler [0032]: “For example, when user 201 clicks on the virtual object 401 corresponding to a component object “D”, the computer program may be adapted to generate or lookup from memory technical information relating to the virtual object “D” 401 selected by the user 201. Additional information may be displayed in an overlay panel 403.”
As stated in the passages above, Kritzler teaches a mobile device that displays a depiction of the real-world object, including the configuration of the system and associated digital information. This information is provided by the “pre-processing” discussed in the analysis of the preceding limitation, which captures the structure and configuration of the system. In addition, the connectable computer components, also addressed above, provide digital information about the specific components in the computer rack system, as Kritzler describes.)
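To summarize the mapping above, the flow recited in claim 1 (scan a marker, retrieve stored characteristics of the system, obtain information from the connected devices, and render the AR/MR interface) may be sketched as follows. This sketch is illustrative only; it is not code from Kritzler, and every name in it (Memory, Device, render_overlay, on_marker_scanned) is hypothetical.

    class Memory:
        """Stores characteristics of the system, keyed by marker identifier."""
        def __init__(self, store=None):
            self.store = store or {}

        def lookup(self, marker_id):
            return self.store.get(marker_id, {})

    class Device:
        """One of the plurality of operationally interconnected electronic devices."""
        def __init__(self, name):
            self.name = name

        def query_status(self):
            return {"device": self.name, "status": "ok"}

    def render_overlay(frame, characteristics, live_info):
        # Stand-in for the AR/MR rendering step: combine the camera frame with
        # the stored characteristics and the live device information.
        return {"frame": frame, "characteristics": characteristics, "info": live_info}

    def on_marker_scanned(memory, devices, marker_id, frame):
        characteristics = memory.lookup(marker_id)        # retrieved from memory
        live_info = [d.query_status() for d in devices]   # obtained from the devices
        return render_overlay(frame, characteristics, live_info)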
As per claim 3, Kritzler teaches the claimed:
3. The arrangement according to claim 1, wherein the marker is provided onto a rack, case, plate or wall, or the marker is provided onto a separate card or token.
(Kritzler [0041]: “In an embodiment, markers may be used on one or more of the components. This would allow the AR application to identify each component in an object. The use of markers may provide the advantage of correctly identifying components.”
Kritzler indicates that the markers can be used on one or more of the components and are used to identify each component; this indicates that a marker may be provided onto the rack, which Kritzler uses as an example embodiment.)
As per claim 5, Kritzler teaches the claimed:
5. The arrangement according to claim 1, wherein the memory is either part of the mobile device or a separate device or a separate system of the arrangement, possibly connected to the cloud.
(Kritzler [0048]: “The computer system 610 may perform a portion or all of the processing steps of embodiments of the invention in response to the processors 620 executing one or more sequences of one or more instructions contained in a memory, such as the system memory 630.” Additionally, FIG. 6 shows that the system memory is part of the computer system.)
As per claim 7, Kritzler teaches the claimed:
7. The arrangement according to claim 1, wherein the system is redefined by adding one or more additional electronic devices, and/or removing one or more of the plurality of electronic devices, and/or adapting settings of one or more of the plurality of electronic devices, and herewith new characteristics of the system are defined.
(Kritzler [0020]: “Multiple components may be arranged within a first cabinet or rack. However, a second cabinet rack may include more or fewer components, including additional components compared to the first cabinet or rack. Furthermore, the components contained in the second cabinet or rack may be arranged in a different order with respect to one another.”
Kritzler [0021]: “The mobile device 105 includes a camera or other imaging device, which may be directed toward the computer rack 101 to sense a digital image or representation of the computer rack 101. At a given point in time, user 103 may view an image or perceived view of computer rack 101 in a first view denoted (a), or may have viewed, computer rack 101 in a second, different configuration denoted (b).”
Kritzler [0022]: “… an AR application that was pre-processed or trained to recognize computer rack 101 in configuration (a), would not be able to recognize the computer rack 101 when it is in configuration (b), due to the elimination, addition and rearrangement of the objects contained in the computer rack 101.”
Kritzler teaches different arrangements and configurations of the computer racks, and different views thereof, indicating that new characteristics of the system are defined. Kritzler further gives an example in which a pre-processed application trained to recognize a first configuration is unable to recognize a second configuration, further evidencing that new characteristics of the system have been defined.)
As per claim 15, this claim is similar in scope to the limitations recited in claim 7, and is thus rejected under the same rationale.
As per claim 8, Kritzler teaches the claimed:
8. The arrangement according to claim 7, wherein said new characteristics of the system are stored in the memory.
(Kritzler [0017]: “To recognize real-world objects, the AR application must have some prior knowledge about the real-world object. Pre-processing involves the recording of point clouds or manipulation of 3D computer aided design (CAD) files associated with a given real-world object. Vision algorithms may take the processed information relating to the real-world object and be applied to a sensed environment or state representation to identify a real-world object contained in the image/scene.”
Kritzler [0022]: “As discussed above, an AR application that was pre-processed or trained to recognize computer rack 101 in configuration (a), would not be able to recognize the computer rack 101 when it is in configuration (b), due to the elimination, addition and rearrangement of the objects contained in the computer rack 101.”
Kritzler teaches that the AR application recognizes the characteristics of one specific computer rack and not another, indicating that it is able to retrieve characteristics of the system from a memory. Additionally, Kritzler describes two different configurations, (a) and (b), which can correspond to the ‘old’ and ‘new’ characteristics of the system; the new characteristics are thus stored in memory after the information is processed.)
As per claim 9, Kritzler teaches the claimed:
9. The arrangement according to claim 1, wherein the mobile device is configured for displaying specifications, technical data, status and/or health data of one or more of the plurality of electronic devices.
(Kritzler [0031]: “… the virtual overlay, comprising the virtual outline 215 and the components selected from the possible component objects 301, portions of the virtual overlay may act as buttons which initiate actions within the AR or MR application. These buttons can give access to digital information connected to the object.”
Kritzler [0032]: “For example, when user 201 clicks on the virtual object 401 corresponding to a component object “D”, the computer program may be adapted to generate or lookup from memory technical information relating to the virtual object “D” 401 selected by the user 201. Additional information may be displayed in an overlay panel 403.”)
As per claim 10, Kritzler teaches the claimed:
10. The arrangement according to claim 1, wherein the processor is adapted for triggering displaying, on the display, of the AR/MR interface for one or more of the plurality of electronic devices of the system, whenever said one or more of the plurality of electronic devices is displayed on the display of the mobile device.
(Kritzler [0021]: “The mobile device 105 includes a camera or other imaging device, which may be directed toward the computer rack 101 to sense a digital image or representation of the computer rack 101. At a given point in time, user 103 may view an image or perceived view of computer rack 101”
Kritzler [0025]: “Mobile device 203 provides user 201 with a representative view 207 of the 3D space 207. The representative view 207 includes a depiction 209 of real-world object 205. Depiction 209 is representative of what user 201 views through mobile device 203. Depiction 209 includes objects in view of the user, as well as any additional information in overlays or audio outputs provided by the AR application. Depiction 209 may display a capture or stream of environment captures sensed by the mobile device, such as a tablet computer, or may be a real-time view of the user's surroundings in the case of a mobile device embodied as a wearable device.”
Kritzler teaches a mobile device that, when directed toward the real-world object, provides the user with a representative view of the computer rack. Kritzler states that the mobile device can sense the real-world object and its surroundings, and the AR interface then displays this view along with any additional information in overlays or audio outputs.)
As per claim 11, Kritzler teaches the claimed:
11. The arrangement according to claim 1, wherein the layout of the AR/MR interface is adaptable, in particular corresponding information being displayed can be moved around on the display of the mobile device.
(Kritzler [0030]: “Referring now to FIG. 3, the representative view 207 seen by user 201 is updated to display a list of possible component objects 301. The list of possible component objects 301 contains representations of component objects that may be associated with the real-world object 205. Using the list of possible component objects 301, the user 201 builds a desired configuration for a specific real-world object 205. The interface displayed in the representative view 207 displays all the components available to real-world object 205 in a panel corresponding to the list of possible component objects 301. The user 201 may manipulate the cursor 211 to drag and drop 303 a component in the list 301 to one or more positions within the virtual outline 215.”)
As per claim 13, Kritzler teaches the claimed:
13. Use of an arrangement according to claim 1, for providing support, in particular for troubleshooting and/or problem solving operations, related to one or more of the plurality of electronic devices, and/or the interconnection or interoperability between the plurality of electronic devices.
(Kritzler [0023]: “The user may build a customized virtual overlay for real-world objects at run time using spatial properties of the user's environment. In this way, the system may adapt to changing configurations. The described systems may be used for servicing, maintaining, managing inventory or repairing technical equipment, by way of non-limiting example.”)
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2 and 4 are rejected under 35 U.S.C. 103 as being unpatentable over Kritzler (US 20190097896 A1) in view of Barros (US 10692289 B2).
As per claim 2, Kritzler alone does not explicitly teach the claimed limitations.
However, Kritzler in combination with Barros teaches the claimed:
2. The arrangement according to claim 1, wherein a physical position of the marker is used to localize the plurality of electronic devices of the system, herewith defining the marker as primary anchor.
(Barros (col 6, line 48-52): “In some implementations, the marker 106A can be considered as an anchor marker. The RDF 122 can define the relative position of the marker 106B based on the detected position of the marker 106A”.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the anchor marker as taught by Barros with the system of Kritzler in order to localize the remaining markers and objects relative to a single detected anchor position.
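To illustrate the anchor-marker localization applied above, the following hypothetical sketch derives each device position from the single detected anchor position plus a known relative offset; the names and offsets are invented for illustration and are not taken from Barros.

    import numpy as np

    def localize_devices(anchor_position, relative_offsets):
        """Derive device positions from the detected position of the anchor
        marker and known offsets relative to that anchor."""
        return {name: anchor_position + offset
                for name, offset in relative_offsets.items()}

    # Example: the anchor marker is detected at (1.0, 0.0, 2.0) in camera space.
    positions = localize_devices(
        np.array([1.0, 0.0, 2.0]),
        {"server_1": np.array([0.0, 0.5, 0.0]),
         "switch_1": np.array([0.0, 1.0, 0.0])})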
As per claim 4, Kritzler alone does not explicitly teach the claimed limitations.
However, Kritzler in combination with Barros teaches the claimed:
4. The arrangement according to claim 1, wherein the marker is a QR-code, NFC or RFID tag or asset tag.
(Barros (col 6, line 64-col 7, line 1): “The markers 106A-D are recognizable by the sensor(s) on the device 110 so that they can be captured accordingly. One or more suitable forms of marker can be used. In some implementations, a code can be used. A code can be a digital code. For example, the marker can include a QR code”.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the QR code as a marker as taught by Barros with the system of Kritzler in order to determine the positions and locations of specific components and provide this information instantly to users.
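As context for the QR-code marker of claim 4, a QR code can be detected and decoded from a camera image with, for example, OpenCV's QRCodeDetector. The sketch below is illustrative only and is not code from Barros; the image path and the use of the decoded string as the system identifier are assumptions.

    # Requires the opencv-python package.
    import cv2

    def read_marker(image_path):
        image = cv2.imread(image_path)
        detector = cv2.QRCodeDetector()
        # detectAndDecode returns the decoded string, the corner points of the
        # code in the image, and a rectified binary image of the code itself.
        data, points, _ = detector.detectAndDecode(image)
        # A non-empty decoded string can serve as the unique system identifier.
        return (data, points) if data else (None, None)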
As per claim 14, the reasons and rationale for the rejection of claims 1 and 13 are incorporated herein; only the additional features unique to claim 14 that are not present in claims 1 and 13 are explicitly addressed here. Kritzler alone does not explicitly teach these remaining limitations.
However, Kritzler in combination with Barros teaches the claimed (limitations already addressed in claims 1 and 13 are omitted):
14.
based on a signal generated by scanning the marker retrieving characteristics of the system and/or one or more of the plurality of electronic devices from the memory,
(Barros (col 8, line 34-38): “The signals from the sensor(s) 218 can be interpreted by one or more image analyzers 220. For example, the image analyzer 220 can read a visual symbol that includes a QR code and inform the AR system 214 about the information encoded in the symbol.”)
based on said retrieved information, generating an augmented and/or mixed reality (AR/MR) interface corresponding to the system, and
(Barros (col 9, line 25-29): “As such, the AR system can … recognize the location 118C as being the location where the marker 106C was originally captured, and can present the virtual object 222 in the AR environment 204 based on this recognition.”)
when directing the camera of the mobile device towards one or more of the plurality of electronic devices of the system, displaying on the display of the mobile device an image of the one or more of the plurality of electronic device of the system, and part of the AR/MR interface corresponding to said one or more of the plurality of electronic devices.
(Kritzler [0021]: “The mobile device 105 includes a camera or other imaging device, which may be directed toward the computer rack 101 to sense a digital image or representation of the computer rack 101.”
Kritzler [0025-0026]: “User 201 is in a 3D space (e.g. a room) containing a real-world object 205. By way of example, real-world object 205 may be a computer rack housing a number of varying computer-related components. Mobile device 203 provides user 201 with a representative view 207 of the 3D space 207. … The representative view 207 provides an interface between a computer processor and the user 201. Thus, the display of objects within the representative view 207 may be controlled by a computer processor within mobile device 203 or otherwise in communication with mobile device 203. Accordingly, an AR and/or MR application running on a computer processor may be in communication with the representative view 207.”
Additionally, FIG. 2 shows an example of a user looking at a real-world object, in this case a computer rack, through the mobile device, and the device provides a representative view of the computer rack that includes AR/MR overlays.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the markers as taught by Barros with the system of Kritzler in order to generate an AR/MR interface for a marked component, such that the information pertaining specifically to that component can be retrieved and displayed.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Kritzler (US 20190097896 A1) in view of Salter (US 9501873 B2).
As per claim 6, Kritzler alone does not explicitly teach the claimed limitations.
However, Kritzler in combination with Salter teaches the claimed:
6. The arrangement according to claim 1, wherein at least one of the plurality of electronic devices is not located in the camera view.
(Salter (col 4, line 16-25): “Storage subsystem 324 comprises instructions stored thereon that are executable by logic subsystem 322, for example, to receive and interpret inputs from the sensors, to identify location and movements of a user, to identify real objects in an augmented reality field of view and present augmented reality imagery therefore, to detect objects located outside a field of view of the user, and to present indications of positional information associated with objects located outside the field of view of the user, among other tasks.”
Salter teaches a display system that presents augmented reality imagery of what the user views; in addition, it is able to detect objects and other components that are not within the user's view and present indications of their positions.
Furthermore, as Kritzler [0041] teaches, a marker placed on only part of a component can be scanned, which likewise indicates that at least one of the plurality of electronic devices need not be located in the camera view.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the AR system configured to provide information about objects outside of a user's view, as taught by Salter, with the system of Kritzler in order to provide an AR/MR system that can assist a user in interacting with complex systems even when not all of the components are visible, thereby improving spatial awareness and efficiency.
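To illustrate the out-of-view indication taught by Salter as combined above, the following hypothetical 2D sketch checks whether a device lies within the camera's horizontal field of view and, if not, yields a direction hint; all names, the field-of-view value, and the ground-plane simplification are assumptions for illustration.

    import math

    def fov_indicator(camera_xy, heading_deg, device_xy, fov_deg=60.0):
        # Bearing from the camera to the device, relative to the camera heading.
        dx = device_xy[0] - camera_xy[0]
        dy = device_xy[1] - camera_xy[1]
        bearing = math.degrees(math.atan2(dy, dx)) - heading_deg
        bearing = (bearing + 180.0) % 360.0 - 180.0  # normalize to (-180, 180]
        if abs(bearing) <= fov_deg / 2:
            return "in view"
        # Positive bearing means the device is counterclockwise (to the left).
        return "device is to the left" if bearing > 0 else "device is to the right"

For example, fov_indicator((0, 0), 90.0, (0, 2)) returns "in view", while fov_indicator((0, 0), 90.0, (2, 0)) indicates the device is to the right.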
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Kritzler (US 20190097896 A1) in view of Wright (US 11417091 B2).
As per claim 12, Kritzler alone does not explicitly teach the claimed limitations.
However, Kritzler in combination with Wright teaches the claimed:
12. The arrangement according to claim 1, further comprising means for requesting remote support and/or assistance, possibly of an expert, wherein said means for requesting remote assistance are part of the mobile device, or of another computing device at least connectable to the mobile device, such that the another computing device is able to share a camera view of the mobile device including the AR/MR interface applied thereon, possibly with the expert.
(Wright (col 11, line 52-57): “Aspects of this disclosure may be performed using a number of different electronic devices. One such electronic device that can be used by a local user for a shared augmented reality session is a shared augmented reality device that can be embodied in the form of a “virtual looking glass”.”
Wright (col 12, line 7-14): “the device 800 may be configured to trigger a remote assistance session with a helper … The remote helper may be able to see through the camera 810, provide instructions using his/her voice and draw augmented reality annotations that appear to stick to objects within the local user's environment when looking at the objects through the “virtual looking glass” 800.”
Wright (col 12, line 22-23): “Remote help sessions can be run on an existing mobile devices”.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the remote assistance as taught by Wright with the system of Kritzler in order to provide real-time remote guidance, enhancing operational efficiency and reducing downtime.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSHUA SUO whose telephone number is (571) 272-8387. The examiner can normally be reached Mon-Fri 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Hajnik, can be reached at (571) 272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOSHUA SUO/Examiner, Art Unit 2616
/DANIEL F HAJNIK/Supervisory Patent Examiner, Art Unit 2616