Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission has been entered.
Response to Arguments
Claims 1-20 are pending.
Applicant’s arguments with respect to amended claims have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 2, 6-8, 12-14, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kaehler, U.S. Patent Application Publication US 20170351909 A1 (hereinafter “Kaehler”), in view of Masters, U.S. Patent Application Publication US 20140055488 A1 (hereinafter “Masters”), Lang et al., US 20160285950 A1 (hereinafter “Lang”), Wang et al., US 20130294642 A1 (hereinafter “Wang”), and in further view of Fiala, U.S. Patent Application Publication US 20170249745 A1 (hereinafter “Fiala”).
Regarding claims 1 and 2, Kaehler discloses an augmented-reality device (fig. 2, wearable system, paragraph 37), the augmented-reality device comprising:
a processor (paragraph 41, processor); and a non-transitory memory (paragraph 41, non-volatile memory) storing one or more programs for execution by the processor, the one or more programs including instructions for:
retrieving security information for persons at respective security checkpoints; and concurrently displaying the security information for the persons on an augmented-reality display, wherein the security information is superimposed on or adjacent to the persons located at respective security checkpoints, the displaying configured to maintain visibility of the security environment (paragraph 27, the application of augmented reality allows an inspector to use a virtual information display on an augmented reality device to pass a traveler through security or take further action; paragraph 79, retrieving security information for a person and recognizing the person or object; figs. 12A, 12B, 13-15, detailed description in paragraphs 102-172; in particular, figs. 12A and 12B, recognizing a person at a security checkpoint and retrieving security information such as identification information of the person, and displaying security information for the person, such as the person’s name, by superimposing the mark adjacent to the person, while at the same time allowing the security officer to maintain visibility of the surrounding security environment).
[media_image1.png: greyscale image, 704 × 825]
While Kaehler discloses the application of an augmented reality device to a security checkpoint environment, and the recognition of and retrieval of information for a person at a security site, Kaehler does not disclose in particular that the security information is retrieved for a plurality of persons, and displayed concurrently for each of the plurality of persons and at least one of a plurality of systems.
In a similar field of endeavor of an augmented reality device utilized to recognize persons of interest, Masters discloses the concept of recognizing a plurality of persons and concurrently displaying recognized information for each of the plurality of persons (fig. 8, plurality of persons 820, displaying a plurality of superimposed tags with recognized information 840; paragraph 47: “user device 800 (which may be a representation of user device 300), displays an augmented reality image 810. The underlying image may be an image captured by camera 304. Augmented reality image 800 may include a number of bodies, such as body 820, representing various individuals. Augmented reality image 800 may also include a number of identification tags, such as identification tag 830, corresponding to each of the bodies.” “In another embodiment, rather than a user selectable icon, all or a portion of the tag information may be displayed over or adjacent to body 820, without requiring any user interaction (e.g., selection of the icon).”).
Both Kaehler and Masters relate to the utilization of an augmented reality device to identify persons of interest. It would have been obvious to one of ordinary skill in the art at the time of filing to incorporate the concept of recognizing a plurality of persons and concurrently displaying superimposed identification information for the plurality of persons, as disclosed by Masters, into the augmented reality device for a security environment of Kaehler, to achieve the predictable result of retrieving security information for a plurality of persons at respective security checkpoints and concurrently displaying the security information for each of the plurality of persons on an augmented-reality display, wherein the security information is superimposed on or adjacent to each of the plurality of persons located at respective security checkpoints, and to improve the augmented reality device of Kaehler for more versatility in a real-world environment.
In sum, Kaehler in view of Masters discloses the concept of:
retrieving security information for a plurality of persons at a security environment;
concurrently displaying the security information for each of the plurality of persons on or adjacent to each of the plurality of persons; and
allowing the user of the augmented-reality device to maintain visibility of the surrounding environment while displaying the information.
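For illustration only (this sketch is not part of the record; all names, records, and coordinates are hypothetical), the combined concept summarized above — one security record per detected person, with each label placed adjacent to rather than on top of its person — might look like:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    person_id: str
    x: int  # top-left corner of the person's bounding box in display coordinates
    y: int
    w: int
    h: int

# Hypothetical security database keyed by recognized person ID.
SECURITY_DB = {
    "p1": "DOE, JOHN - CLEARED",
    "p2": "ROE, JANE - SECONDARY SCREENING",
}

def build_overlays(detections):
    """Return one (text, x, y) label per detected person, positioned just
    above each bounding box so the person and scene remain visible."""
    overlays = []
    for d in detections:
        info = SECURITY_DB.get(d.person_id, "UNKNOWN")
        # Place the label adjacent to (above) the person, not over the person.
        overlays.append((info, d.x, max(0, d.y - 20)))
    return overlays
```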
Kaehler in view of Masters does not disclose in particular displaying the security information for each of the plurality of persons and at least one of a plurality of systems, because the person for whom security information is retrieved in Kaehler in view of Masters is not associated with a particular system.
In a similar field of endeavor, Lang discloses an augmented-reality device retrieving access control information for a person with an ID/ticket at a particular access control system, and displaying access control security information retrieved for a plurality of systems on the augmented reality device, while allowing the user to maintain visibility of the surrounding environment, including the access control system itself:
Paragraph 2, data goggles with a head-up display designed to project an image into the field of view of the user;
Paragraph 4: Access control systems usually comprise at least one server and at least one access control device which can be connected to the server for the purpose of data communication … access control device records access authorizations and in the event of a valid access authorization, actuates a locking member in the opening sense in order to allow access. ID-based access control systems use the ID of a customer medium, where the ID of the customer medium is read out by access control devices of the access control system and is transmitted to the at least one server, which by means of the ID allows or refuses access via the ID-transmitting access control device;
Paragraphs 8, 9, it is an object of the invention to allow monitoring of the access control system while maintaining a view of the customer;
Paragraph 10: a method for monitoring and controlling an access control system is proposed, comprising at least one server and at least one access control device which can be connected to the at least one server for the purpose of data communication, within the framework of which data goggles are used for monitoring and controlling the access control system, which data goggles are connected wirelessly to the at least one server of the access control system and the at least one access control device for the purpose of data communication and receive data in real time from the at least one server and/or the at least one access control device, which enable the monitoring of the access control system, wherein these data can be displayed to the user of the data goggles by means of a display device of the data goggles. The access control system is preferably designed as an ID-based, access control system.
Paragraph 12: “notifications and the available control commands are displayed directly to the user. Such notifications can be notifications about defined events or irregularities in the access control system, for example, about a blocked or defective access control device, about attempted fraud, vandalism or about the detection of invalid access authorizations.”
Paragraph 28: a plurality of access control systems may be monitored concurrently.
[media_image2.png: greyscale image, 710 × 1013]
Kaehler in view of Masters discloses retrieving and displaying information in real time, concurrently, for a plurality of persons with IDs or other recognizable security features on an augmented reality device. Kaehler in view of Masters further discloses that the system may be configured to recognize persons as well as objects in the surrounding environment (Kaehler, paragraphs 79, 109).
Lang additionally discloses the concept of displaying security information retrieved in real time on an augmented reality device for people with IDs seeking access via a particular access control system. It would have been obvious to one of ordinary skill in the art at the time of filing to incorporate the technique of Lang of implementing an augmented reality device for persons associated with an access control system into the device of Kaehler in view of Masters displaying retrieved security information while maintaining visibility of the surrounding security environment, such that the system of Kaehler in view of Masters may be implemented in a security environment with access control systems, and information relating to both the security systems and the persons at the security systems is displayed, to constitute concurrently displaying the security information for each of the plurality of persons and at least one of a plurality of systems on an augmented-reality display, wherein the security information is superimposed on or adjacent to each of the plurality of persons located at respective security checkpoints, the displaying configured to maintain visibility of the respective security checkpoints.
Kaehler in view of Masters and Lang does not disclose the option of retrieving additional information about one of the plurality of persons in response to a user request.
In a similar field of endeavor of facial recognition, Wang discloses the concept of optionally retrieving, in response to a user request, additional information about one of a plurality of persons within a video who is recognized via facial recognition; see paragraphs 132, 141: “The method 1700 may include, at 1710, detecting a face (e.g., a facial image) appearing in a frame of digital video data by processing the video file with a facial detection algorithm executing in one or more computers. … The method 1700 may further include, at 1720, configuring at least one user-selectable link to be activated along a track of the face through multiple frames of the video data, wherein the user-selectable link comprises a data address for obtaining additional information about a person identified with the face”, and claim 1: “A method for providing interactive video content, the method comprising: detecting a face appearing in a frame of digital video data by processing the video file with a facial detection algorithm executing in one or more computers; configuring at least one user-selectable link to be activated along a track of the face through multiple frames of the video data, wherein the user-selectable link comprises a data address for obtaining additional information about a person identified with the face; and storing the video data associated with the user-selectable link in a computer memory.”
Kaehler in view of Masters and Lang discloses performing facial recognition on a recorded video stream of the security environment to recognize a person at the security environment. Wang additionally discloses the concept of providing a user-selectable link to provide additional information about a recognized person within a video. It would have been obvious to one of ordinary skill in the art at the time of filing to incorporate the technique of Wang of providing the user the option of retrieving additional information about one of a plurality of recognized persons into the device of Kaehler in view of Masters and Lang processing video captured in a security environment, such that the system of Kaehler in view of Masters and Lang additionally provides the user of the system the ability to retrieve additional information about one of the plurality of persons in response to a user request. Such is the incorporation of a known concept into a known device to yield a predictable result; the result would have been predictable and would allow the user of the augmented-reality device more versatility in controlling the display of information about persons recognized at the security environment.
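As a purely illustrative sketch (not part of the record; the track data and URL are hypothetical placeholders), Wang's user-selectable link following a face track across frames can be modeled as a per-frame hit test: a click resolves to the additional-information address only if it lands inside the tracked face's box for that frame.

```python
# Hypothetical face track: frame index -> (x, y, w, h) of the tracked face.
FACE_TRACK = {0: (100, 100, 50, 50), 1: (104, 101, 50, 50), 2: (108, 102, 50, 50)}
INFO_URL = "https://example.invalid/person/123"  # placeholder data address

def resolve_click(frame, cx, cy, track=FACE_TRACK, url=INFO_URL):
    """Return the additional-information address if the user clicked inside
    the face's tracked box in this frame; otherwise None (no request made)."""
    box = track.get(frame)
    if box is None:
        return None  # face not present in this frame; link is inactive
    x, y, w, h = box
    return url if x <= cx <= x + w and y <= cy <= y + h else None
```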
Kaehler in view of Masters, Lang, and Wang does not disclose in particular (from claim 1) wherein at least a portion of the augmented-reality display is determined using one or more fiduciary markers; and
(from claim 2) wherein one or more fiduciary markers are disposed at or adjacent to respective security checkpoints, the fiduciary markers configured to determine location information for an augmented-reality display.
In a similar field of endeavor of augmented reality displays, Fiala discloses the concept of using fiduciary markers to change at least a portion of the augmented-reality display: adding a fiduciary marker to an object or scene to aid the camera in finding correspondences between camera images and to provide an overlaid virtual graphic in the augmented-reality display (paragraph 2, “Marker patterns can be added to objects or scenes to allow automatic systems to find correspondence between points in the world and points in camera images, and to find correspondences between points in one camera image and points in another camera image. The former has application in positioning, robotics, and augmented reality applications, the latter has application in automatic computer modeling to provide the coordinates of world points for applications of the former. Furthermore, marker patterns can be used to contain information relating to various products. For example, marker patterns printed out and mounted on a piece of equipment would allow an augmented reality system to aid a person constructing or servicing this equipment by overlaying virtual graphics with instructions over their view (known as “Augmented Reality”), with the aid of an image sensor (light capturing device such as camera, video camera, digital camera, etc) and the computer vision techniques that locate these patterns. Furthermore with camera cell phones and PDAs becoming commonly available, a marker could be used to link a user to an URL address providing access to a series of images, advertisement etc. Another example of use includes a robot which could navigate by detecting markers placed in its environment.”; see also fig. 30, paragraph 209, detection of a marker and overlay of a web or generic graphic in the augmented reality display).
In addition, Fiala discloses the concept of attaching fiduciary markers to real-world physical objects to aid the augmented reality device in determining location (Fiala, paragraphs 2, 3, 9, 18, 25-27, 28, fiducial markers conveniently mounted on objects or locations to help identify points in the environment and find correspondences, to improve the visual quality of the view).
Kaehler in view of Masters, Lang, and Wang discloses the utilization of an augmented reality device in a security check environment with real-world objects. Fiala further discloses the concept of applying fiduciary markers to objects and locations, such that a portion of the display in augmented reality may be altered by a detected fiduciary marker (via an overlay) for an improved visual view, and using the fiduciary marker to determine locations in the scene under view. It would have been obvious to one of ordinary skill in the art at the time of filing to incorporate the concept of attaching fiduciary markers to real-world physical objects to aid the augmented reality device in providing additional content and determining location, as disclosed by Fiala, into the security check environment of the augmented reality system of Kaehler in view of Masters, Lang, and Wang, to provide the benefit of more accurately determining the location correspondence between the augmented reality device and real-world objects, and to provide the benefit of improved overlay content in the displayed image. The result would have been predictable and would constitute (from claim 1) wherein at least a portion of the augmented-reality display is determined using one or more fiduciary markers; and (from claim 2) wherein one or more fiduciary markers are disposed at or adjacent to respective security checkpoints, the fiduciary markers configured to determine location information for an augmented-reality display, and would allow the user to utilize the augmented reality system in a security environment with improved visual effect.
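For illustration only (a minimal sketch under the assumption of a square-marker detector such as ArUco, which reports four image-space corners per detected marker; the label text is hypothetical), anchoring a portion of the augmented-reality display to a detected fiduciary marker reduces to computing the marker's centroid and positioning the overlay there:

```python
def marker_anchor(corners):
    """Given the four image-space corners of a detected square fiducial
    marker (as a square-marker detector such as ArUco would return them),
    compute the centroid used as the overlay's anchor point."""
    xs = [c[0] for c in corners]
    ys = [c[1] for c in corners]
    return (sum(xs) / 4.0, sum(ys) / 4.0)

def place_overlay(corners, label):
    """Anchor a label to the marker's centroid so that the portion of the
    augmented-reality display carrying the label follows the marker."""
    x, y = marker_anchor(corners)
    return {"label": label, "x": x, "y": y}
```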
Regarding claim 6, Kaehler in view of Masters, Lang, Wang, and Fiala discloses the augmented-reality device according to claim 1, further comprising instructions for retrieving real-time image data, and superimposing the security information on the real-time image data (Kaehler, figs. 12A, 12B, 13-15, detailed description in paragraphs 102-172; in particular, figs. 12A and 12B, recognizing in real time a person at a security checkpoint, retrieving security information such as identification information of the person, and displaying security information for the person, such as the person’s name, by superimposing the mark adjacent to the person).
Regarding claim 7, this is a Beauregard claim (i.e., "non-transitory machine-readable medium") counterpart of device claim 1, both reciting substantially similar subject matter. Accordingly, claim 7 is rejected for the same reasons as claim 1.
Regarding claim 8, this is a Beauregard claim (i.e., "non-transitory machine-readable medium") counterpart of device claim 2, both reciting substantially similar subject matter. Accordingly, claim 8 is rejected for the same reasons as claim 2.
Regarding claim 12, this is a Beauregard claim (i.e., "non-transitory machine-readable medium") counterpart of device claim 6, both reciting substantially similar subject matter. Accordingly, claim 12 is rejected for the same reasons as claim 6.
Regarding claim 13, this is a method claim counterpart of device claim 1, both reciting substantially similar subject matter. Accordingly, claim 13 is rejected for the same reasons as claim 1.
Regarding claim 14, this is a method claim counterpart of device claim 2, both reciting substantially similar subject matter. Accordingly, claim 14 is rejected for the same reasons as claim 2.
Regarding claim 18, this is a method claim counterpart of device claim 6, both reciting substantially similar subject matter. Accordingly, claim 18 is rejected for the same reasons as claim 6.
Regarding claim 19, Kaehler in view of Masters, Lang, Wang, and Fiala discloses the method according to claim 13, wherein the displayed information matches the user’s field of view (see Kaehler, figs. 12A, 12B; Masters, fig. 8, displayed information within the user’s field of view of the real-world environment).
Regarding claim 20, Kaehler in view of Masters, Lang, Wang, and Fiala discloses the method according to claim 13, wherein the concurrently displaying is according to the user’s field of view (see Kaehler, figs. 12A, 12B; Masters, fig. 8, displayed information within the user’s field of view of the real-world environment).
Claims 3, 9, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Kaehler in view of Masters, Lang, Wang, and Fiala, as applied in the rejections of claims 2, 8, and 14 above, and in further view of Haddick et al., U.S. Patent Application Publication US 2011/0221656 A1 (hereinafter “Haddick”).
Regarding claim 3, Kaehler in view of Masters, Lang, Wang, and Fiala discloses the augmented-reality device according to claim 2. Kaehler in view of Masters, Lang, Wang, and Fiala does not disclose in particular wherein one or more hyper-local location sensors are used to determine the location information.
The concept of an augmented reality device with an integrated hyperlocal sensor to further refine location determination is, however, well known in the art, as disclosed by Haddick, which discloses a wearable augmented reality device with an integrated hyperlocal location sensor to enable hyperlocal augmented reality information to be displayed (paragraph 324).
Kaehler in view of Masters, Lang, Wang, and Fiala discloses the utilization of an augmented reality device in a security check environment; Haddick discloses an augmented reality device with an integrated hyperlocal sensor to further refine location determination. It would have been obvious to one of ordinary skill in the art at the time of filing to incorporate the concept of an integrated hyperlocal sensor, as disclosed in Haddick, into the augmented reality device of Kaehler in view of Masters, Lang, Wang, and Fiala, to constitute wherein one or more hyper-local location sensors are used to determine the location information, for the benefit of improved location determination of the augmented reality device while achieving the predictable result of allowing the user to utilize the augmented reality system in a security environment.
Regarding claim 9, this is a Beauregard claim (i.e., "non-transitory machine-readable medium") counterpart of device claim 3, both reciting substantially similar subject matter. Accordingly, claim 9 is rejected for the same reasons as claim 3.
Regarding claim 15, this is a method claim counterpart of device claim 3, both reciting substantially similar subject matter. Accordingly, claim 15 is rejected for the same reasons as claim 3.
Claims 4, 10, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Kaehler in view of Masters, Lang, Wang, and Fiala, as applied in the rejections of claims 1, 7, and 13 above, and in further view of Kaputin, U.S. Patent Application Publication US 20040039460 A1 (hereinafter “Kaputin”).
Regarding claim 4, Kaehler in view of Masters, Lang, Wang, and Fiala discloses the augmented-reality device according to claim 1, wherein the augmented-reality device includes augmented-reality glasses (Kaehler, fig. 2, augmented reality device being wearable glasses). The combination as made in the rejection of claim 1 does not address in particular that the at least one of the plurality of systems is an immigration kiosk.
Kaehler in view of Masters, Lang, Wang, and Fiala discloses, however, that the augmented reality device may be used in a security checkpoint environment with an access control system (see the rejection of claim 1).
It is further noted that a security checkpoint with an access control system may be an immigration kiosk, as disclosed by Kaputin (paragraphs 28, 29, “automated kiosk intended for immigration control and passenger clearance”).
It would have been obvious to implement the access control system of Kaehler in view of Masters, Lang, Wang, and Fiala as an immigration kiosk, as disclosed in Kaputin, to constitute the at least one of the plurality of systems being an immigration kiosk; the result would have been predictable and would allow the user of the augmented reality device at the security environment to provide immigration clearance.
Regarding claim 10, this is a Beauregard claim (i.e., "non-transitory machine-readable medium") counterpart of device claim 4, both reciting substantially similar subject matter. Accordingly, claim 10 is rejected for the same reasons as claim 4.
Regarding claim 16, this is a method claim counterpart of device claim 4, both reciting substantially similar subject matter. Accordingly, claim 16 is rejected for the same reasons as claim 4.
Claims 5, 11, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Kaehler in view of Masters, Lang, Wang, and Fiala, as applied in the rejections of claims 1, 7, and 13 above, and in further view of Aronsson et al., U.S. Patent Application Publication US 2012/0019557 A1 (hereinafter “Aronsson”).
Regarding claim 5, Kaehler in view of Masters, Lang, Wang, and Fiala discloses the augmented-reality device according to claim 1. Kaehler in view of Masters, Lang, Wang, and Fiala does not disclose in particular wherein the augmented-reality device includes a smartphone or tablet.
It is well known, however, that an augmented reality device may be implemented in a smartphone or tablet, as disclosed in Aronsson, wherein an augmented reality device used for image recognition comprises a smart phone, tablet computer, or AR glasses (paragraphs 13, 69, fig. 1B).
It would have been obvious to one of ordinary skill in the art at the time of filing to incorporate the concept of implementing an augmented reality device in a smartphone or a tablet, as disclosed by Aronsson, into the augmented reality device of Kaehler in view of Masters, Lang, Wang, and Fiala, such that the augmented reality device of Kaehler in view of Masters, Lang, Wang, and Fiala comprises a smartphone or tablet, as regardless of whether the augmented reality device is implemented in AR glasses, a smartphone, or a tablet, the predictable result of allowing the user to utilize the augmented reality system in a security environment would have been the same.
Regarding claim 11, this is a Beauregard claim (i.e., "non-transitory machine-readable medium") counterpart of device claim 5, both reciting substantially similar subject matter. Accordingly, claim 11 is rejected for the same reasons as claim 5.
Regarding claim 17, this is a method claim counterpart of device claim 5, both reciting substantially similar subject matter. Accordingly, claim 17 is rejected for the same reasons as claim 5.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PEIJIE SHEN whose telephone number is (571)272-5522. The examiner can normally be reached Monday - Friday 10AM - 6PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Patrick Edouard, can be reached at 571-272-7603. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PEIJIE SHEN/Examiner, Art Unit 2622
/PATRICK N EDOUARD/Supervisory Patent Examiner, Art Unit 2622