DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendments
This Office action responds to the amendments filed on September 16, 2025, for application 17/570,140. Claims 1 and 9 are amended, claim 21 is added as a new claim, and claims 9-21 remain pending in the application.
Response to Arguments
The Examiner has fully considered the Applicant’s arguments filed on September 16, 2025, and the Examiner responds as provided below.
Regarding the Applicant’s arguments at pages 1-3 of the Remarks concerning the § 103 rejection, the Applicant’s arguments in conjunction with the claim amendments are persuasive, and the Examiner has consequently conducted a new prior art search. The Applicant’s arguments are now moot with respect to the pending claims because they do not apply to one of the references currently used in the rejection of the claims, as detailed below.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The following conventions apply to the mapping of the prior art to the claims:
Italicized text – claim language.
Parenthetical plain text – Examiner’s citation and explanation.
Citation without an explanation – an explanation has been previously provided for the respective limitation(s).
Quotation marks – language quoted from a prior art reference.
Underlining – language quoted from a claim.
Brackets – material altered from either a prior art reference or a claim, which includes the Examiner’s explanation that relates a claim limitation to the quoted material of a reference.
Braces – a limitation taught by another reference, but the limitation is presented with the mapping of the instant reference for context.
Numbered superscript – a first phrase to be moved upward to the primary reference analysis.
Lettered superscript – a second phrase to be moved after the first phrase from which it was lifted has itself been moved; more succinctly, numbered material moves first, lettered material last.
A. Claims 9, 11-13 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Uprit (US 2023/0119556, “Uprit”) in view of Gehler et al. (US 2022/0107774, “Gehler”), and further in view of Khan et al. (US 2022/0326967, “Khan”).
Regarding Independent Claim 9
Uprit discloses
A method (Fig. 3, abstract) comprising:
receiving, by a first computing device and from a server (Fig. 5, ¶¶ [0064]-[0065], “User 501 may be connected (503) to the system, wired or wirelessly, via an AR-enabled [first computing] device such as a pair of smart glasses or goggles.”; and “The system may, in some embodiments, be connected to, or through, cloud 519 [possessing a server].”; and ¶ [0028], “When the image includes an object from the list of objects, the central server may be configured to mask the object in the image.”):
at least one portion of content (¶ [0064], “User 501 may capture an image [one portion of content] of an object that may display sensitive information, such as computing device 505 (e.g., smart phone, tablet, laptop, desktop, etc.),...”; and “If the image is determined to include NPI, the system [possessing the server] may display [from the server] an edited, or masked, image 523 to [received by] some or all other users [possessing the AR device/first computing device].”);
1 …; and
2 …;
receiving, via a camera associated with the first computing device, a video footage (¶ [0025], “The central server may be configured to receive, from a first user device, an image that was scanned in via the camera of the first user [computing] device. The image may be a single stationary image. The image may be a part of a moving image, for example, a still frame from a video [footage] clip.”) that depicts an…3 version of the at least one portion of the content being displayed on the display device associated with the second computing device (Fig. 5, ¶ [0064], “User 501 may capture an image [one portion of the content] of an object that may display sensitive information, such as [second] computing device 505 (e.g., smart phone, tablet, laptop, desktop, etc.), payment instrument 507 (e.g., a credit card), or document 509 (e.g., a contract or agreement).”);
4 …; and
5 ….
Uprit does not disclose
1 one or more locations of the at least one portion of the content, on a screen of a display device associated with a second computing device;
2 information about the display device;
3 … obfuscated…
4 modifying, by the first display device, the video footage by overlaying, based on the information about the display device and further based on the one or more locations, a non-obfuscated version of the at least one portion of the content over the obfuscated version of the at least one portion of the content as depicted in the video footage;
5 displaying, by the first computing device, the modified video footage.
Gehler, however, discloses
1 one or more locations of the at least one portion of the content, on a screen of a display device associated with a second computing device (Fig. 4, ¶ [0068], “As a result, when the user wears the HMD [head-mounted device] 104 and views the display screen 108 of the primary user [second computing] device 102 through the HMD [first computing device] 104, the unredacted portions of the document appear to overlay [suggesting the use of one or more locations of the at least one portion of the content] the corresponding redacted portions of the filtered labeled content 220 displayed via the display screen 108 of the primary user device 102.”, i.e., Gehler does not literally teach the use of “locations” to place features on a screen of a display, but it would be obvious to one skilled in the art to collect such information and to apply it to a displayable object to ensure that it may be properly “overlaid” on the screen to fulfill the intended purpose of obscuring sensitive information. See MPEP § 2141(III), stating “Prior art is not limited just to the references being applied, but includes the understanding of one of ordinary skill in the art. The prior art reference (or references when combined) need not teach or suggest all the claim limitations, however, Office personnel must explain why the difference(s) between the prior art and the claimed invention would have been obvious to one of ordinary skill in the art.” See also Khan ¶¶ [0025]-[0030]);
3 … obfuscated… (Fig. 2, ¶ [0056], “In the example of FIG. 2, if the content analysis circuitry 202 detects sensitive information identifier(s) 212 in portion(s) (e.g., a video frame, a page of a multi-age document) of the labeled content 136, the content modification circuitry 204 filters the portion(s) of the labeled content 136 associated with the sensitive information identifier(s) 212 to generate example filtered labeled content 220 for presentation via the primary user device 102. As a result of the filtering, one or more viewing properties of the sensitive information is adjusted such that the sensitive information is not. In some examples, the content modification circuitry 204 causes the content associated with the sensitive information identifier(s) 212 to be hidden, removed, modified [obfuscated] so as to become unreadable, etc.”);
4 modifying, by the first display device, the video footage by overlaying, based on the information about the display device and further based on the one or more locations, a non-obfuscated version of the at least one portion of the content over the obfuscated version of the at least one portion of the content as depicted in the video footage (¶¶ [0067]-[0070], “For example, when the labeled content 136 includes a document, the secondary display management circuitry 208 can transmit the unredacted portions of the document for presentation at the HMD [first display device] 104 as augmented reality content. As a result, when the user wears the HMD 104 and views the display screen 108 of the primary user device 102 through the HMD 104, the unredacted portions of the document [as video footage] appear to overlay the corresponding redacted portions of the filtered labeled content 220 displayed via the display screen 108 of the primary user device 102. In such instances, the secondary display management circuitry 208 transmits the portions of the document that are flagged with the sensitive information indicator(s) 214 for presentation. When the user of the HMD 104 views the content via the HMD 104, the sensitive information appears [via modifying the video footage] to be aligned with, overlaying, or replacing the redacted portions displayed via the primary user device 102. Thus, when the user wears the HMD 104, the filtered labeled content 220 displayed at the primary user device 102 is augmented with the sensitive information [non-obfuscated version of the portion of the content] that is not otherwise visible via the primary user device 102.”; and ¶ [0059], “For instance, the content filtering rule(s) 222 can define that when the metadata of a video [footage] frame includes a tag indicative of sensitive information, the content modification circuitry 204 should cause a blur filter or a scramble filter to be applied to the video frame.”);
5 displaying, by the first computing device, the modified video footage (Fig. 4, ¶ [0068], “Thus, when the user wears the HMD [first computing device] 104, the filtered labeled content 220 displayed at the primary user device 102 is augmented [to create the modified video footage] with the sensitive information that is not otherwise visible via the primary user device 102.”).
Regarding the combination of Uprit and Gehler, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the AR security system of Uprit to arrive at the claimed invention. KSR establishes that a rationale for obviousness is proven by showing a “use of [a] known technique to improve similar devices in the same way.” See MPEP § 2143(I)(C).
To substantiate the conclusion of obviousness under this KSR rationale, the Examiner finds pursuant to MPEP § 2143(I)(C):
1) the prior art contained a base system, namely the AR security system of Uprit, upon which the claimed invention can be seen as an “improvement” through the use of a video obfuscation feature;
2) the prior art contained a “comparable” system, namely the AR system of Gehler, that has been improved in the same way as the claimed invention through the video obfuscation feature; and
3) one of ordinary skill in the art could have applied the known improvement technique of applying the video obfuscation feature to the base AR security system of Uprit, and the results would have been predictable to one of ordinary skill in the art.
Khan, however, discloses
2 information about the display device (¶¶ [0025]-[0030], “In some examples, displaying the view of the AR environment on the AR display device [first computing device] comprises: obtaining 2D display device display surface area information indicating [about] a display surface area of the 2D display device…”);
Regarding the combination of Uprit-Gehler and Khan, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the AR security system of Uprit-Gehler to arrive at the claimed invention. KSR establishes that a rationale for obviousness is proven by showing a “use of [a] known technique to improve similar devices in the same way.” See MPEP § 2143(I)(C).
To substantiate the conclusion of obviousness under this KSR rationale, the Examiner finds pursuant to MPEP § 2143(I)(C):
1) the prior art contained a base system, namely the AR security system of Uprit-Gehler, upon which the claimed invention can be seen as an “improvement” through the use of a display sizing feature;
2) the prior art contained a “comparable” system, namely the AR system of Khan, that has been improved in the same way as the claimed invention through the display sizing feature; and
3) one of ordinary skill in the art could have applied the known improvement technique of applying the display sizing feature to the base AR security system of Uprit-Gehler, and the results would have been predictable to one of ordinary skill in the art.
Regarding Claim 11
Uprit in view of Gehler, and further in view of Khan (“Uprit-Gehler-Khan”) discloses the method of claim 9, and Uprit further discloses
wherein the first computing device comprises at least one of: a wearable computing device, a head-mounted display device, a virtual reality display device, an augmented reality display device, or a mixed reality display device (¶ [0022], “Each user device from the plurality of user devices may include a camera, a screen configured to provide a device user with an AR experience, and a communication interface. User devices may, for example, include smart glasses [wearable computing device], goggles, displays (such as a “heads-up display”), or any other suitable device capable of providing a user with an AR experience.”).
Regarding Claim 12
Uprit-Gehler-Khan discloses the method of claim 9, and Gehler further discloses
wherein the obfuscated version of the at least one portion of the content is obfuscated by at least one of: pixelating the at least one portion, blurring the at least one portion, blocking the at least one portion, or removing the at least one portion (Fig. 2, ¶ [0056]).
Regarding the combination of Uprit and Gehler, the rationale to combine is the same as provided for claim 9 due to the overlapping subject matter of claims 9 and 12.
Regarding Claim 13
Uprit-Gehler-Khan discloses the method of claim 9, and Khan further discloses
wherein the information about the display device indicates at least one of a screen size or a screen resolution (¶ [0029], “…processing the 2D display device display surface area information [about the display device] to generate secondary DUI [] screen virtual size information, and displaying the view of an AR environment on the AR display device such that the secondary DUI screen is displayed with a virtual size indicated by the secondary DUI screen virtual size information.”).
Regarding the combination of Uprit-Gehler and Khan, the rationale to combine is the same as provided for claim 9 due to the overlapping subject matter of claims 9 and 13.
Regarding Claim 21
Uprit-Gehler-Khan discloses the method of claim 9, and Gehler further discloses
wherein the video footage (¶ [0059], “For instance, the content filtering rule(s) 222 can define that when the metadata of a video [footage] frame includes a tag indicative of sensitive information, the content modification circuitry 204 should cause a blur filter or a scramble filter to be applied to the video frame.”) is captured by the camera while the video footage is depicted on the display device associated with the second computing device (¶ [0067], “In examples in which the HMD 104 is an augmented reality device, at least a portion of the real-world environment including the primary user device 102 can be visible [and captured by the camera] to the user while the user is wearing the HMD [and the video footage depicted on the display device associated with the second computing device] 104. In such examples, the secondary display management circuitry 208 can cause augmented reality content corresponding to the sensitive information to be presented via the HMD 104 such that the sensitive information appears to overlay or augment the filtered (e.g., redacted, censored) portions of the filtered labeled content 220 presented via the primary user device 102.”).
Regarding the combination of Uprit and Gehler, the rationale to combine is the same as provided for claim 9 due to the overlapping subject matter of claims 9 and 21.
B. Claims 10 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Uprit in view of Gehler and Khan, and further in view of Carter (US 10,528,838, “Carter”) and Syed et al. (US 2021/0375049, “Syed”).
Regarding Claim 10
Uprit-Gehler-Khan discloses the method of claim 9, and Khan further discloses
wherein the {modifying the video footage (Carter Figs. 3a-b, Col. 7:8-17)} comprises:
determining, based on the information about the display device associated with the second computing device (¶ [0029]), a location,…1 associated with the first computing device, of an anchor for the display device associated with the second computing device (¶ [0094], “At 502, the display submodule 330 of the computing device 200 obtains 2D display device location information indicating a location of the 2D display device 106. In order to [determine] anchor the virtual DUI [distributed user interface] screens (i.e. the secondary DUI screen and any additional DUI screens) in virtual space around the 2D display device [associated with the second computing device] 106, the HMD unit 116 may detect the device in its field of view using a camera of the HMD unit 116 and/or the HMD IMU 112.”; and ¶ [0110], “In particular, the virtual DUI screens displayed by the AR display [first computing] device 104 remain anchored to the physical display [device] (i.e. 2D display device 106) within the AR environment.”); and
determining, based on the location of the anchor and the one or more locations of the at least one portion of the content, a location, within the video footage, of the non-obfuscated version of the at least one portion of the content (¶ [0110], “The virtual DUI screens [possessing the video footage] are each assigned [determined] a [based on] location anchored to the 2D display device [having a location and possessing the portion of the content] 106.”).
Regarding the combination of Uprit-Gehler and Khan, the rationale to combine is the same as provided for claim 9 due to the overlapping subject matter of claims 9 and 10.
Regarding the combination of Uprit-Gehler-Khan and Carter, the rationale to combine is the same as provided for claim 9 due to the overlapping subject matter of claims 9 and 10.
Uprit-Gehler-Khan does not disclose
1 …in a coordinate space…
Syed, however, discloses
1 …in a coordinate space… (¶ [0036], “Spatial data associated with the one or more virtual objects may be determined. The spatial data associated with the one or more virtual objects may comprise data associated with the position in 3D space (e.g., x, y, z coordinates).”)
Regarding the combination of Uprit-Gehler-Khan and Syed, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the AR security system of Uprit-Gehler-Khan to arrive at the claimed invention. KSR establishes that a rationale for obviousness is proven by showing a “use of [a] known technique to improve similar devices in the same way.” See MPEP § 2143(I)(C).
To substantiate the conclusion of obviousness under this KSR rationale, the Examiner finds pursuant to MPEP § 2143(I)(C):
1) the prior art contained a base system, namely the AR security system of Uprit-Gehler-Khan, upon which the claimed invention can be seen as an “improvement” through the use of a coordinate feature;
2) the prior art contained a “comparable” system, namely the AR system of Syed, that has been improved in the same way as the claimed invention through the coordinate feature; and
3) one of ordinary skill in the art could have applied the known improvement technique of applying the coordinate feature to the base AR security system of Uprit-Gehler-Khan, and the results would have been predictable to one of ordinary skill in the art.
Regarding Claim 14
Uprit-Gehler-Khan discloses the method of claim 9, and Khan further discloses
wherein the {modifying the video footage (Carter Figs. 3a-b, Col. 7:8-17)} comprises performing at least one of:
1 ….
Syed further discloses
1 scaling a size of the non-obfuscated version of the at least one portion of the content; changing an orientation of the non-obfuscated version of the at least one portion of the content; or shifting a position of the non-obfuscated version of the at least one portion of the content (¶ [0038], “For example, if the virtual object [non-obfuscated version of the content displayed on the second computing device] is not moving within the augmented reality scene (e.g., the virtual animal remains at rest on the table), the position of the virtual object in the augmented reality scene may be adjusted to maintain appropriate position, scale, and/or orientation. In another example, if the virtual object is moving within the augmented reality scene (e.g., the virtual animal jumps off the table), the position of the virtual object in the augmented reality scene may be adjusted to maintain appropriate position, scale, and/or orientation.”).
Regarding the combination of Uprit-Gehler-Khan and Syed, the rationale to combine is the same as provided for claim 10 due to the overlapping subject matter of claims 10 and 14.
C. Claims 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over Uprit in view of Gehler.
Regarding Independent Claim 15
Uprit discloses
A system (Fig. 5, abstract), comprising:
a server (Fig. 5, ¶¶ [0064]-[0065], “The system may, in some embodiments, be connected to, or through, cloud 519 [possessing a server].”; and ¶ [0028], “When the image includes an object from the list of objects, the central server may be configured to mask the object in the image.”);
a first computing device comprising a first display device (Fig. 5, ¶¶ [0064]-[0065], “User 501 may capture an image of an object that may display [via a first display device] sensitive information, such as computing [first computing] device 505 (e.g., smart phone, tablet, laptop, desktop, etc.),...”); and
a second computing device (Fig. 5, ¶¶ [0064]-[0065], “User 501 may be connected (503) to the system, wired or wirelessly, via an AR-enabled [second computing] device such as a pair of smart glasses or goggles.”) comprising:
a camera (¶ [0025], “The central server may be configured to receive, from a first user [second computing] device, an image that was scanned in via the camera of the first user device.”); and
a second display device (¶ [0022], “Each user device from the plurality of user devices may include a camera, a screen [second display] configured to provide a device user with an AR experience, and a communication interface.”),
wherein the first computing device (Fig. 5, ¶¶ [0064]-[0065]) is configured to:
send, to the server, a request for content, wherein the content comprises at least one portion that is marked as sensitive content (¶¶ [0064]-[0065], “User 501 may capture an image of an object that may display sensitive information [previously obtained through a request sent to the server, i.e., sensitive information is not sent indiscriminately unless requested], such as [first] computing device 505 (e.g., smart phone, tablet, laptop, desktop, etc.),…”; and “The system may, in some embodiments, be connected to, or through, cloud [server] 519. If the image [sent because of the request] is determined not to include nonpublic personal information (NPI), the system may display the unedited image 521 to other users. If the image [content] is determined to include NPI [sensitive content], the system may display an edited, or masked [one portion of the content is marked], image 523 to some or all other users.”);
wherein the server (Fig. 5, ¶¶ [0064]-[0065]) is configured to:
based on determining that the first computing device does not satisfy an authorized device criterion and…1 (Fig. 5, ¶¶ [0064]-[0065], “If the image is determined not to include nonpublic personal information (NPI), the system may display the unedited image 521 to other users. If the image is determined to include NPI, the system may display an edited, or masked [because the first computing device does not satisfy an authorized device criterion], image 523 to some or all other users.”):
send, to the first computing device, a modified version of the content,…2 (Fig. 5, ¶¶ [0064]-[0065], “If the image is determined not to include nonpublic personal information (NPI), the system may display the unedited image 521 to other users. If the image [content] is determined to include NPI, the system may display an edited [modified version], or masked, image 523 to some or all other users.”); and
3 …; and
wherein the second computing (Fig. 5, ¶¶ [0064]-[0065]) device is configured to:
receive, via the camera of the second computing device, a video footage (¶ [0025], “The central server may be configured to receive, from a first user device, an image that was scanned in via the camera of the first user [second computing] device. The image may be a single stationary image. The image may be a part of a moving image, for example, a still frame from a video [footage] clip.”) that depicts the modified version of the content being displayed on the first display device (Fig. 5, ¶¶ [0064]-[0065], “User 501 may capture an image [content] of an object that may display sensitive information, such as [first] computing device 505 (e.g., smart phone, tablet, laptop, desktop, etc.), payment instrument 507 (e.g., a credit card), or document 509 (e.g., a contract or agreement).”; and “If the image is determined to include NPI, the system may display [on the first display device] an edited, or masked [of a modified version], image 523 to some or all other users.”);
4 …; and
5 ….
Uprit does not disclose
1 …further based on determining that the second computing device satisfies the authorized device criterion:
2 …, wherein the modified version of the content comprises the at least one portion that is obfuscated;
3 send, to the second computing device, an unmodified version of the content;
4 modify, by the second computing device, the video footage by overlaying the unmodified version of the content at least partially over the modified version of the content as depicted in the video footage;
5 display, on the second display device, the modified video footage;
Gehler, however, discloses
1 …further based on determining that the second computing device satisfies the authorized device criterion (¶ [0088], “Conversely, example unfiltered content 400 is visible to the authenticated user [that satisfies the authorized device criterion] 306 while wearing the [second computing device] HMD 104.”):
2 …, wherein the modified version of the content comprises the at least one portion that is obfuscated (Fig. 2, ¶ [0056], “In the example of FIG. 2, if the content analysis circuitry 202 detects sensitive information identifier(s) 212 in portion(s) (e.g., a video frame, a page of a multi-age document) of the labeled content 136, the content modification circuitry 204 filters the portion(s) of the labeled content 136 associated with the sensitive information identifier(s) 212 to generate example filtered labeled content 220 for presentation via the primary user device 102. As a result of the filtering, one or more viewing properties of the sensitive information is adjusted such that the sensitive information is not. In some examples, the content modification circuitry 204 causes the content associated with the sensitive information identifier(s) 212 to be hidden, removed, modified [obfuscated] so as to become unreadable, etc.”);
3 send, to the second computing device, an unmodified version of the content (¶¶ [0067]-[0070], “For example, when the labeled content 136 includes a document, the secondary display management circuitry 208 can transmit [send] the unredacted [unmodified] portions of the document [unmodified version of the content] for presentation at the HMD [second computing device] 104 as augmented reality content.);
4 modify the video footage by overlaying the unmodified version of the content at least partially over the modified version of the content as depicted in the video footage (¶¶ [0067]-[0070], “For example, when the labeled content 136 includes a document, the secondary display management circuitry 208 can transmit the unredacted portions of the document for presentation at the HMD 104 as augmented reality content. As a result, when the user wears the HMD 104 and views the display screen 108 of the primary user device 102 through the HMD 104, the unredacted portions of the document [as video footage] appear to overlay the corresponding redacted portions of the filtered labeled content 220 displayed via the display screen 108 of the primary user device 102. In such instances, the secondary display management circuitry 208 transmits the portions of the document that are flagged with the sensitive information indicator(s) 214 for presentation. When the user of the HMD 104 views the content via the HMD 104, the sensitive information appears [via modifying the video footage] to be aligned with, overlaying, or replacing the redacted portions displayed via the primary user device 102. Thus, when the user wears the HMD 104, the filtered labeled content 220 displayed at the primary user device 102 is augmented with the sensitive information that is not otherwise visible via the primary user device 102.”; and ¶ [0059], “For instance, the content filtering rule(s) 222 can define that when the metadata of a video [footage] frame includes a tag indicative of sensitive information, the content modification circuitry 204 should cause a blur filter or a scramble filter to be applied to the video frame.”);
5 display, on the second display device, the modified video footage (Fig. 4, ¶ [0068], “Thus, when the user wears the HMD [second computing device] 104, the filtered labeled content 220 displayed at the primary user device 102 is augmented [to create the modified video footage] with the sensitive information that is not otherwise visible via the primary user device 102.”);
Regarding Claim 16
Uprit in view of Gehler discloses the system of claim 15, and Uprit further discloses
wherein the authorized device criterion (Edwards ¶ [0052], “Referring now to FIG. 3b, when the section of the document is viewed by an authorized user on an authorized AR [second computing] device,…”) comprises a requirement that a device is at least one of: a wearable computing device, a head-mounted display device, a virtual reality display device, an augmented reality display device, or a mixed reality display device (¶ [0022], “Each user device from the plurality of user devices may include a camera, a screen configured to provide a device user with an AR experience, and a communication interface. User devices may, for example, include smart glasses [wearable computing device], goggles, displays (such as a “heads-up display”), or any other suitable device capable of providing a user with an AR experience.”).
Regarding the combination of Uprit and Gehler, the rationale to combine is the same as provided for claim 15 due to the overlapping subject matter of claims 15 and 16.
Regarding Claim 17
Uprit-Gehler discloses the system of claim 15, and Uprit further discloses
wherein the at least one portion in the modified version of the content is obfuscated by performing at least one of: pixelating the at least one portion, blurring the at least one portion, blocking the at least one portion, or removing the at least one portion (Fig. 3a, Col. 6:54-7:7, “For example, the actual social security number is replaced by the C*nfidential marker 304, the actual DOB is replaced by the C*nfidential marker 306 and the actual address is replaced by redacted text 308 that also includes a C*nfidential marker.”).
Regarding the combination of Uprit and Gehler, the rationale to combine is the same as provided for claim 15 due to the overlapping subject matter of claims 15 and 17.
D. Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Uprit in view of Gehler, and further in view of Khan and Syed.
Regarding Claim 18
Uprit-Gehler discloses the system of claim 15, and Uprit further discloses
wherein the second computing device is configured to {modify the video footage (Carter Figs. 3a-b, Col. 7:8-17)} by:
1 …; and
2 ….
Uprit-Gehler does not disclose
1 determining a location, in a coordinate space associated with the second computing device, of an anchor for the first display device; and
2 determining, based on the location of the anchor, a location, within the video footage, of the unmodified version of the content.
Khan, however, discloses
1 determining a location, …a associated with the second computing device, of an anchor for the first display device (¶ [0094], “At 502, the display submodule 330 of the computing device 200 obtains 2D display device location information indicating a location of the 2D display device 106. In order to [determine] anchor the virtual DUI [distributed user interface] screens (i.e. the secondary DUI screen and any additional DUI screens) in virtual space around the 2D display device [associated with the first computing device] 106, the HMD unit 116 may detect the device in its field of view using a camera of the HMD unit 116 and/or the HMD IMU 112.”; and ¶ [0110], “In particular, the virtual DUI screens displayed by the AR display [second computing] device 104 remain anchored to the physical display [device] (i.e. 2D display device 106) within the AR environment.”);
2 determining, based on the location of the anchor, a location, within the video footage, of the unmodified version of the content (¶ [0110], “The virtual DUI screens [possessing the video footage] are each assigned [determined] a [based on] location anchored to the 2D display device [having a location and possessing the portion of the content] 106.”).
Regarding the combination of Uprit-Gehler and Khan, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the AR security system of Uprit-Gehler to arrive at the claimed invention. KSR establishes that a rationale for obviousness is proven by showing a “use of [a] known technique to improve similar devices in the same way.” See MPEP § 2143(I)(C).
To substantiate the conclusion of obviousness under this KSR rationale, the Examiner finds pursuant to MPEP § 2143(I)(C):
1) the prior art contained a base system, namely the AR security system of Uprit-Gehler, upon which the claimed invention can be seen as an “improvement” through the use of an anchor feature;
2) the prior art contained a “comparable” system, namely the AR system of Khan, that has been improved in the same way as the claimed invention through the anchor feature; and
3) one of ordinary skill in the art could have applied the known improvement technique of applying the anchor feature to the base AR security system of Uprit-Gehler, and the results would have been predictable to one of ordinary skill in the art.
Syed, however, discloses
a …in a coordinate space… (¶ [0036], “Spatial data associated with the one or more virtual objects may be determined. The spatial data associated with the one or more virtual objects may comprise data associated with the position in 3D space (e.g., x, y, z coordinates).”).
Regarding the combination of Uprit-Gehler-Khan and Syed, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the AR security system of Uprit-Gehler-Khan to arrive at the claimed invention. KSR establishes that a rationale for obviousness is proven by showing a “use of [a] known technique to improve similar devices in the same way.” See MPEP § 2143(I)(C).
To substantiate the conclusion of obviousness under this KSR rationale, the Examiner finds pursuant to MPEP § 2143(I)(C):
1) the prior art contained a base system, namely the AR security system of Uprit-Gehler-Khan, upon which the claimed invention can be seen as an “improvement” through the use of a coordinate feature;
2) the prior art contained a “comparable” system, namely the AR system of Syed, that has been improved in the same way as the claimed invention through the coordinate feature; and
3) one of ordinary skill in the art could have applied the known improvement technique of applying the coordinate feature to the base AR security system of Uprit-Gehler-Khan, and the results would have been predictable to one of ordinary skill in the art.
E. Claims 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Uprit in view of Gehler, and further in view of Khan.
Regarding Claim 19
Uprit-Gehler discloses the system of claim 15, and Uprit further discloses
wherein the first computing device (Fig. 5, ¶¶ [0064]-[0065]) is further configured to:
1 …; and
2 …; and
wherein the server (Fig. 5, ¶¶ [0064]-[0065]) is further configured to:
3 ….
Edwards further discloses
1 send, to the server, one or more locations of the at least one portion (Fig. 6, ¶ [0069], “Additionally, or alternatively, the computing device may transmit [send] its location to an authentication server (e.g., service server 305, intermediate server 315), which may store the computing device's location.”);
Regarding the combination of Uprit and Edwards, the rationale to combine is the same as provided for claim 15 due to the overlapping subject matter of claims 15 and 19.
Khan further discloses
2 information about the first display device (¶ [0095], “At 504, 2D display device display surface area information is obtained [by the server] by the computing device 200, e.g. by the DUI layout submodule 320 or the display submodule 330.”);
3 send the information to the second computing device (¶ [0029], “In some examples, displaying the view of the AR environment on the AR display device [first computing device] comprises: obtaining 2D display device [associated with a second computing device] display surface area information indicating [about] a display surface area of the 2D display device…”).
Regarding the combination of Uprit-Gehler and Khan, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the AR security system of Uprit-Gehler to arrive at the claimed invention. KSR establishes that a rationale for obviousness is proven by showing a “use of [a] known technique to improve similar devices in the same way.” See MPEP § 2143(I)(C).
To substantiate the conclusion of obviousness under this KSR rationale, the Examiner finds pursuant to MPEP § 2143(I)(C):
1) the prior art contained a base system, namely the AR security system of Uprit-Gehler, upon which the claimed invention can be seen as an “improvement” through the use of a device information feature;
2) the prior art contained a “comparable” system, namely the AR system of Khan, that has been improved in the same way as the claimed invention through the device information feature; and
3) one of ordinary skill in the art could have applied the known improvement technique of applying the device information feature to the base AR security system of Uprit-Gehler, and the results would have been predictable to one of ordinary skill in the art.
Regarding Claim 20
With respect to dependent claim 20, reasoning analogous to that given earlier for dependent claim 10 applies, mutatis mutandis, to the subject matter of claim 20. Therefore, claim 20 is rejected, for similar reasons, under the grounds set forth for claim 10.
Conclusion
Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to D'ARCY WINSTON STRAUB whose telephone number is (303)297-4405. The examiner can normally be reached Monday-Friday 9:00-5:00 Mountain Time.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, WILLIAM KORZUCH can be reached at (571)272-7589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/D'Arcy Winston Straub/Primary Examiner, Art Unit 2491