Detailed Action
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the response filed on 10/30/2025, no claims were amended, added, or cancelled. As a result, claims 1-20 are pending.
Response to Arguments
Applicant’s arguments presented in the response filed on 10/30/2025 have been carefully and respectfully considered by the examiner but are not persuasive.
On page 4 of the remarks filed on 10/30/2025, applicant argues that the privacy policies of Yildiz provided by the privacy policy server are defined by individual users and/or IT professionals directing functionality of an enterprise network. The examiner respectfully disagrees. Yildiz, para. [0057], states that “The privacy policy server in an embodiment may store a plurality of privacy policies, each of which may identify one or more digital content items that may be labeled or identified as sensitive, confidential, and/or private. These privacy policies may be created prior to the display of any sensitive, confidential, and/or private information, and may be maintained by individual users, and/or by IT professionals directing functionality of an enterprise network.” Yildiz thus teaches that the policies may be generated/created by the users and/or the enterprise IT professionals; the passage mentions the users or the IT professionals only as maintaining the policies at the server accessible to the host. The examiner maintains that a trusted third party, the application users, and the application developers come together in implementing the policies, and thus the policies qualify as the information provided by the application or by a developer of the application.
Relevant Prior Art: Lebeck et al. (“Securing Augmented Reality Output”, 2017) teaches at pages 331-333 an output policy module for which AR application developers write output policies and which evaluates the output policies.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-7, 10-13, 15, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Yu et al. (US 2021/0165917 A1) hereinafter Yu, in view of Chen et al. (US 2022/0374543 A1) hereinafter Chen, and further in view of Yildiz et al. (US 2019/0340815 A1) hereinafter Yildiz.
As to claim 1, Yu teaches a computer-implemented method comprising: receiving, by a software component (see para. [0032], e.g., a process that handles an application's (app) request to access and use a digital camera) on a user device (e.g. information handling system in claim 8 “8. An information handling system comprising: one or more processors; a memory coupled to at least one of the processors; a digital camera accessible by at least one of the processors; a network interface that connects the information handling system to a computer network; and a set of computer program instructions stored in the memory and executed by at least one of the processors in order to perform actions comprising: detecting that a digital camera has been set to a privacy mode that limits access to the digital camera; receiving a request to access the digital camera from an application; determining whether the requesting application is allowed access to the digital camera while the digital camera is in the privacy mode; allowing access by the requesting application to the digital camera in response to the determination being that the requesting application is allowed access to the digital camera; and inhibiting access by the requesting application to the digital camera in response to the determination being that the requesting application is disallowed access to the digital camera.”) from an application executing on the user device, a request to access a sensor of the user device (see para. [0008] for an application executing outside of the intermediary layer; see also para. [0027] for applications 162; e.g., passively receiving a field-of-view stream of images);
determining, by the software component, that the application is permitted to receive particular data derived from the sensor (see para. [0036] “The process determines as to whether the requesting application is included in list of applications that can access and use the digital camera while the device is in the Privacy mode (decision 720). If the requesting application is included in list of applications that can access and use the digital camera while the device is in the Privacy mode, then decision 720 branches to the ‘yes’ branch for further processing. On the other hand, if the requesting application is not included in the list (indicating that the requesting application is not allowed access to the digital camera while in Privacy mode), then decision 720 branches to the ‘no’ branch and processing ends at 725 with the process disregarding the application's request to access the digital camera.”).
Yu does not explicitly teach but Chen teaches “responsive to determining that the application is permitted to receive particular data derived from the sensor: obtaining, by the software component, sensor data output by the sensor; and providing, by the software component, the particular data to the application” (see para.[0064], and para. [0084]-[0086] “[0084] At operation 820, a display permission for one or more of the identified objects in the obtained image data is determined. According to some embodiments, determining the display permission (for example, as described with reference to blocks 425 and 435 of FIG. 4) comprises a two part determination of whether the detected object is on a deny list, and a second determination of a privacy policy/display rule applicable to objects on the deny list. [0085] According to various embodiments, at operation 825, where the display permission associated with one or more detected objects in the image data specifies not passing the image data beyond the intermediary layer, a pixel level modification of image data comprising, at a minimum, pixels within the determined contour coordinates is performed. According to some embodiments, the modification of the data comprises replacing the pixels within the contour coordinates with a field of a chosen color. In certain embodiments, to reduce the likelihood of the privacy-sensitive object being inferred from the shape of the modified image data, pixels beyond those within the contour coordinates may be modified, or a more computationally-intensive modification technique (for example, guided blending) may be performed to diminish the likelihood of privacy-sensitive information being recovered. [0086] At operation 830, the modified image data is passed to one or more AR applications executing outside of the intermediary layer. In certain embodiments, the modified image data may also be cached locally within the intermediary layer (for example, in frame cache 440 in FIG. 4).”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Yu and Chen before him or her, to modify the scheme of Yu by including Chen’s removing of privacy-sensitive objects in an augmented reality system. The suggestion/motivation for doing so would have been to provide effective controls for restricting the dissemination of ambient privacy-sensitive data captured through the normal operation of cameras and video sensors supporting AR displays, as briefly discussed in Chen, para. [0003]-[0008].
The combination of Yu and Chen does not explicitly teach but Yildiz teaches “determining, by the software component, the particular data based on the sensor data and information provided by the application or by a developer of the application, the information identifying the particular data” (see para. [0057]-[0059] “
[0057] At block 402, the host information handling system in an embodiment may receive an identification of sensitive, confidential, and/or private information from a privacy policy server. In an embodiment, the privacy policy server may be incorporated within the host information handling system, or may be located remotely from the host information handling system and in communication with the host information handling system via a wired or wireless network. The privacy policy server in an embodiment may store a plurality of privacy policies, each of which may identify one or more digital content items that may be labeled or identified as sensitive, confidential, and/or private. These privacy policies may be created prior to the display of any sensitive, confidential, and/or private information, and may be maintained by individual users, and/or by IT professionals directing functionality of an enterprise network. Each privacy policy may be associated with a single authorized user, groups of authorized users, or all authorized users within a given network. Further, the digital content items identified within each privacy policy as sensitive information may identify as sensitive information specific files or data records, all files or data records of a given type (e.g. Microsoft® Excel), and/or subparts of digital content within a single data record or file. For example, in an embodiment described with reference to FIG. 2A, the text 204 may not be identified as sensitive, while the still image 206 may be identified as sensitive.
[0058] In an embodiment at block 404, the host information handling system may receive a user request or instruction to display content, including the sensitive, confidential, and/or private information identified at block 402. For example, in an embodiment described with reference to FIG. 2A, the host information handling system 218 may receive a user request or instruction to display digital content file 202, including an image 206 and text 204. As described directly above with respect to block 402, the host information handling system 218 may have received a privacy policy identifying the digital content file 202, the image 206, and/or the text 204 as sensitive, private, or confidential information.
[0059] At block 406, the host information handling system in an embodiment may deduct the identified sensitive, confidential, and/or private information from the remainder of the digital content the user requested be displayed, and display the digital content without the sensitive, confidential, or private information. For example, in an embodiment described with reference to FIG. 2B, the augmented reality assisted display security system operating at least partially within the host information handling system 218 may display the digital content 202, but may deduct or redact the sensitive text 204 and sensitive still image 206 from the display. In other embodiments, the augmented reality assisted display security system may display the digital content 202, but may deduct or redact only one of the text 204, or the still image 206 from the display, based upon identification of only one of these content items as sensitive information in a privacy policy. In yet another embodiment, the augmented reality assisted display security system may disallow display of the entire digital content file 202, based upon identification of the entire digital file 202 as sensitive information in a privacy policy. Deduction of secure content in an embodiment may include, but may not be limited to removing the content from the display via the host display, and covering or obfuscating the display of the content.”; see also para. [0037] for watermark based identification of sensitive content part).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Yu, Chen and Yildiz before him or her, to modify the scheme of Yu and Chen by including Yildiz’s policy-based identification of sensitive content. The suggestion/motivation for doing so would have been to make sensitive data viewable only by the authorized user and wearer of the head mounted display device, without distorting the images displayed by the host video display and without exposing the sensitive data displayed on a mobile device screen, for example, using the policy on how to identify the sensitive data provided by the secure viewing application developer, as briefly discussed in Yildiz, para. [0016] and [0017].
Claims 19-20 include similar limitations as claim 1, and thus claims 19-20 are rejected under the same rationale as claim 1.
As to claim 2, in view of claim 1, Chen and Yildiz teach wherein the particular data derived from the sensor is different than the sensor data output by the sensor (see Chen para. [0008] “modifying the image data in the region comprising the one or more identified objects according to the display permission and outputting the modified image data to an application executing outside of the intermediary layer.”; see Yildiz para. [0017]).
As to claim 3, in view of claim 1, Chen teaches wherein determining the particular data comprises filtering or transforming the sensor data to thereby provide the particular data derived from the sensor (see para. [0079] “wherein image data of certain privacy sensitive objects is modified, obscured, or otherwise prevented from passing beyond the intermediary layer to processes operating within a “normal” operating system world executing on the device. The image data of the whiteboard, the computer monitor, and the calendar is replaced with blank regions 751a-751c. According to certain embodiments, the contours of blank regions 751a-751c may be softened or expanded to avoid closely following the silhouettes of the privacy-sensitive objects. In this way, the likelihood of the privacy-sensitive objects' identities being inferred from regions of modified data is reduced.”).
As to claim 4, in view of claim 1, Yildiz teaches wherein the information provided by the application or by the developer of the application that identifies the particular data comprises: (a) information provided by the application that defines the particular data (see para. [0049] “The head mounted display CPU 314 in an embodiment may operate to receive the sensitive (e.g. private/confidential) information from the host information handling system 218. In some embodiments, the head mounted display CPU 314 may also receive from the host information handling system 218 the three-dimensional image captured by the host three-dimensional camera 312, and/or an identification of the location of the one or more placeholders with respect to either the watermark or the physical boundaries of the host video display 110.”), (b) information provided by a developer of the application that defines the particular data, (c) information provided by the application that defines a manner in which the sensor data is to be filtered or transformed to provide the particular data; or (d) information provided by the developer of the application that defines a manner in which the sensor data is to be filtered or transformed to provide the particular data.
As to claim 5, in view of claim 1, Chen teaches wherein determining the particular data comprises determining the particular data based on the sensor data, the information provided by the application or by the developer of the application that identifies the particular data, and known information about the application or function of the application (see para. [0064] “denying all applications from accessing all image data of the operating environment (a setting appropriate for certain locations, such as government buildings or banks), filtering all objects on a deny list, allowing AR applications to access all image data, and conditionally allowing objects to be displayed (for example, a geofencing constraint which selectively allows or disallows objects to be shown depending on the location).”; It is noted that the modification of the image is based on the context of the image and the identity of the application.)
As to claim 6, in view of claim 1, Chen teaches wherein the software component (e.g., intermediary layer) is an Operating System (OS) or OS component of the user device or has a trusted relationship therewith (see para. [0045] “…certain embodiments according the present disclosure implement an intermediary layer at the operating system (OS) level to remove or modify privacy-sensitive information captured during from cameras supporting AR applications. FIG. 3A and 3B illustrate two, non-limiting examples of architectures according to the present disclosure in which an OS-level intermediary layer is implemented.”)
As to claim 7, in view of claim 6, Chen teaches wherein the application does not have a trusted relationship with the OS or OS component of the user device (see para. [0079]; It is noted that any application placed outside the OS is not trusted enough to have direct access to the unmodified images including privacy-sensitive objects.).
As to claim 10, in view of claim 1, Chen teaches wherein determining that the application is permitted to receive the particular data derived from the sensor comprises determining that the application is permitted to receive the particular data derived from the sensor based on one or more user-defined or Operating System (OS) -defined permissions for the application (see para. [0025] “…the process stores the privacy mode configuration of the selected app (e.g., app identifier, security actions, etc.) in data store 460.”).
As to claim 11, in view of claim 1, Yu teaches wherein the sensor comprises at least one of a camera, a microphone, an accelerometer, or a gyroscope (see para. [0026]).
As to claim 12, in view of claim 1, Chen teaches wherein the software component (e.g., intermediary layer) supports requests for one or more types of particular data for one or more types of sensors (see para. [0009] “…modify the image data in the region comprising the one or more identified objects according to the display permission, and output the modified image data to an application executing outside of the intermediary layer.”).
As to claim 13, in view of claim 1, Chen teaches wherein the user device comprises a plurality of software components each supporting requests for different types of particular data derived from the sensor (see para. [0047] for first intermediary layer and para. [0048] for second intermediary layer), particular data for one or more different types of sensors, or both.
As to claim 15, in view of claim 1, Chen teaches wherein the user device is or includes at least one of a smartphone, a tablet computer, a personal computer, or an Augmented Reality (AR) / Virtual Reality (VR) device (see para. [0038] and [0078], e.g., HMD or AR).
Claims 8-9 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Yu, in view of Chen, in view of Yildiz, and further in view of Swingler et al. (US 2012/0311697 A1) hereinafter Swingler.
As to claim 8, in view of claim 1, the combination of Yu, Chen and Yildiz does not explicitly teach but Swingler teaches “wherein the application is a web browser, the request is associated to a particular website or webpage, and determining that the application is permitted to receive the particular data derived from the sensor comprises determining that the particular website or webpage is permitted to receive the particular data derived from the sensor” (see Swingler para. [0020]-[0022] “[0020]… Application 101 can be any kind of application such as a standalone application. Alternatively, application 101 may be a plug-in application or applet that is hosted within another application. For example, application 101 may be a Java.TM. applet embedded within a Web page hosted or processed by a browser application, where the Web page may be downloaded from a variety of information or service provider servers such as Web servers. In this example, a Java applet communicates with the browser application via a corresponding agent or plug-in (e.g., Java plug-in), where the browser application communicates with security framework 103 via a system API (e.g., API 102). Some applets may include photo uploaders and picture takers, interactive maps that show a user's real-time location, or collaborative document editors. [0021] In one embodiment, application 101 includes information describing one or more permissions requested and/or required by application 101, referred to herein as application-level permissions 107. Application-level permissions 107 refer to the permissions requested by application 101 for accessing one or more resources of system 100 during execution of application 101. Application-level permissions 107 may be specified by a developer or administrator of application 101. Application-level permissions 107 are typically specified in a format that is compatible with the API 102 in a programming language of application 101. Application-level permissions 107 may or may not be described in a human understandable manner. [0022] In one embodiment, application-level permissions 107 include a first portion of one or more permissions that are required by application 101 and a second portion of one or more permissions that are optionally required by application 101 during execution of application 101. That is, the required permissions represent the permissions an application needs in order to perform its basic functions. An example of such a required permission for an application such as a photo editor application includes a permission to access files in the user's home directory on the file system. The optional permissions represent those permissions the application would benefit from having, but whose absence will still allow the application to perform its basic functions. An example of such an optional permission is a permission to print a picture. As another example, an instant messaging application would require access to connect over a network to an instant messaging server, but would optionally request access to the system's microphone and camera.”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Yu, Chen, Yildiz and Swingler before him or her, to modify the scheme of Yu, Chen and Yildiz by including Swingler’s technique of restricting a Web page application from accessing one or more resources based on user-level permissions. The suggestion/motivation for doing so would have been to provide, to Web pages, effective controls for restricting the dissemination of ambient privacy-sensitive data captured through the normal operation of cameras and video sensors supporting AR displays.
As to claim 9, in view of claim 1, Swingler teaches wherein the software component is a web browser (see para. [0020] “…application 101 may be a plug-in application or applet that is hosted within another application. For example, application 101 may be a Java.TM. applet embedded within a Web page hosted or processed by a browser application, where the Web page may be downloaded from a variety of information or service provider servers such as Web servers. In this example, a Java applet communicates with the browser application via a corresponding agent or plug-in (e.g., Java plug-in), where the browser application communicates with security framework 103 via a system API (e.g., API 102). Some applets may include photo uploaders and picture takers, interactive maps that show a user's real-time location, or collaborative document editors.”), the application is a web application (e.g. plug-in application or applet), and determining that the application is permitted to receive the particular data derived from the sensor comprises determining that the web application is permitted to receive the particular data derived from the sensor (see para. [0038] “At block 702, it is determined whether the requested permissions have been previously granted during a previous execution of the application. Such a determination may be performed by comparing the requested permissions with permissions listed in a security profile associated with the application. The security profile may have been created and stored in a persistent storage during a previous execution of the application.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Yu, Chen, Yildiz and Swingler before him or her, to modify the scheme of Yu, Chen and Yildiz by including Swingler’s technique of restricting a Web page application from accessing one or more resources based on user-level permissions. The suggestion/motivation for doing so would have been to provide, to Web pages, effective controls for restricting the dissemination of ambient privacy-sensitive data captured through the normal operation of cameras and video sensors supporting AR displays.
As to claim 14, in view of claim 1, the combination of Yu, Chen and Yildiz does not explicitly teach but Swingler teaches further comprising: presenting, by the software component via an output component of the user device, a prompt that requests user confirmation that sharing of the particular data with the application is allowed; and receiving user input responsive to the prompt; wherein providing the particular data to the application comprises providing the particular data to the application only if the user input indicates that sharing of the particular data with the application is allowed (see para. [0025] and [0035] “[0035] If the currently requested permissions are different than the previously requested permissions, according to one embodiment, a GUI page such as the one as shown in FIG. 4 may be displayed requesting a user to confirm the authorization of the new permissions requested. In addition, according to one embodiment, if certain optional permissions were previously requested, the GUI page may still be displayed to confirm whether the user wishes to grant the optional permissions this time around. In some situations, a user may have granted an optional permission during a previous execution of the application, but the user may not want to grant the same optional permission in a subsequent execution of the application. The new settings may be updated in the security profile dependent upon whether the user indicates that the new settings should be saved in a persistent security profile. Furthermore, the security profile may be removed or erased from the persistent storage if the user denies all the permissions requested.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Yu, Chen, Yildiz and Swingler before him or her, to modify the scheme of Yu, Chen and Yildiz by including Swingler’s technique of restricting a Web page application from accessing one or more resources based on user-level permissions. The suggestion/motivation for doing so would have been to provide, to Web pages, effective controls for restricting the dissemination of ambient privacy-sensitive data captured through the normal operation of cameras and video sensors supporting AR displays.
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Yu, in view of Chen, in view of Yildiz, and further in view of Cordiner et al. (US 2020/0366670 A1) hereinafter Cordiner.
As to claim 16, in view of claim 1, the combination of Yu and Chen teaches wherein the user device is or includes an Augmented Reality (AR) / Virtual Reality (VR) device (see Chen, para. [0038] and [0078], e.g., HMD or AR), the sensor is a camera comprised in the AR/VR device (see Yu, para. [0026]), and: obtaining the sensor data from the sensor comprises obtaining a picture or a live video stream from the camera (see Chen para. [0064]).
The combination of Yu, Chen and Yildiz does not explicitly teach but Cordiner teaches “determining the particular data based on the sensor data comprises extracting a user authentication token from the obtained picture or live video stream; and providing the particular data to the application comprises providing the user authentication token to the application” (see para. [0060] – [0062] “This may include performing image processing on the image data to identify and extract image data associated with the physical object and to compare the image data associated with the physical object with expected image data stored in association with the record. In some implementations, the physical object may be associated with a token which uniquely identifies the physical environment and/or the safe door and analysing the image data for the presence of the object may include extracting and validating the token.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Yu, Chen, Yildiz and Cordiner before him or her, to modify the scheme of Yu, Chen and Yildiz by including Cordiner’s extraction and validation of an authentication token from captured image data. The suggestion/motivation for doing so would have been to enhance multi-factor authentication safety by further including augmented reality object based authentication in the multi-factor authentication, as briefly mentioned in Cordiner, para. [0002]-[0009].
Allowable Subject Matter
Claims 17-18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: As to claim 17, even though Ortiz et al. (US 11,539,525 B2) teaches using an AR-camera-captured user token to authenticate the user during an e-commerce transaction, the prior art of record and further search do not explicitly teach the limitation “at the application on the AR/VR device: receiving the user authentication token from the software component responsive to the request; and providing the user authentication token to the e-commerce system.”
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HEE SONG whose telephone number is (571)270-3260. The examiner can normally be reached on Mon – Fri, 7:30 AM – 5:00 PM.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Eleni Shiferaw can be reached on (571)272-3867. The fax phone number for the organization where this application or proceeding is assigned is 571-273-7291.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HEE K SONG/Primary Examiner, Art Unit 2497