DETAILED ACTION
The present Office Action is in response to the application filed on 10/03/2024, wherein claims 1-20 are pending and ready for examination.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Applicant’s claim for the benefit of a prior-filed provisional application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 5, 6, 9, 10, 12, 13, 14, 17, 18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Song et al. (CN112434327A – citations are based on English translation provided), hereinafter Song, in view of Grossman et al. (US 20190182549 A1), hereinafter Grossman.
Regarding claim 1, Song discloses a method comprising:
receiving, using an application server (“server 105”), formatting information that is associated with sensitive information, from a computing device (when a user uses a browser, shopping app, or financial app on their mobile phone, the terminal device or server can read and identify the content of the page to be displayed in advance to determine whether there is sensitive information – see [n0050]; when sensitive information is detected on the page to be displayed, the presentation format of the sensitive information can also be identified, and corresponding conversion strategies can be executed – see [n0063]; see also [n0057-58]);
generating, using the application server and based on the formatting information, a video (in step S210, sensitive information in the page to be displayed is obtained – see [n0048]; in step S220, the sensitive information is converted to obtain streaming media with DRM rules applied – see [n0052]; see also [n0049-51]; the presentation format of the sensitive information can also be identified, and corresponding conversion strategies can be executed – see [n0063]; examiner’s note: when sensitive data is stored in the cloud the server can convert and encrypt the sensitive data in the cloud to obtain streaming media; and, streaming media refers to video stream data, as discussed in [n0056-57]); and
transmitting, using the application server, the video to the computing device (In step S230, the original control corresponding to the sensitive information is replaced with a streaming media playback control that applies DRM rules, so as to play the streaming media that applies DRM rules using the streaming media playback control – see [n0053]; the server performs format conversion, encryption and watermarking on the sensitive information in the cloud to generate video stream data containing the data content corresponding to the sensitive information; on the terminal device side, the APP directly receives the video stream data processed in the cloud; at the same time, the original controls corresponding to sensitive information on the page to be displayed will be replaced with streaming media playback controls that apply DRM rules, and the streaming media playback controls, i.e., streaming media players, will be used to play the received video stream data containing the data content corresponding to the sensitive information – see [n0057]).
Song does not explicitly disclose that the video includes a single image frame, wherein the single image frame represents the sensitive information. In other words, Song does not explicitly disclose how many frames are included in the video.
However, Grossman discloses a method of protecting digital content from screen capture operation by converting the digital content format into a content-protected format, streaming the digital content from a local content server to a local content client implemented on a user device and causing the user device to display the digital content (see abstract, [0049-55], FIG. 4), including wherein the video includes a single image frame, wherein the single image frame represents the sensitive information (at 410, the text data of the digital content is converted into an image of text – see [0050]; at 420, the image of text is converted into video content in a content-protected format, for example, by generating video content for which the image of text is included as a frame of video – see [0051]; at 430, screenshot-proof video display or playback begins – see [0052]; at 440, … to determine when playback begins, the local content client may wait for … a callback for when the playback has reached the end (for example, if the video content includes a single video frame) – see [0053]; examiner’s note: in this case, the text data content that needs to be protected is converted into video content, and the video content may include a single video frame).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method in Song to include that the video includes a single image frame, wherein the single image frame represents the sensitive information, as taught by Grossman. One would have been motivated to make such a combination because text data may be converted into video content by rendering the text data as one or more frames of the video content; it may be rendered into multiple images of text that are converted into different frames of the video content, and the frames may be viewed sequentially, for example, by selecting specific frames to display statically or by frame-by-frame scrubbing during playback, as recognized by Grossman (see [0054]).
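For illustration only (this sketch is not part of the cited references or the record), the Grossman pipeline at [0050]-[0051] — text data rendered to an image, with that single image included as the only frame of the video — can be modeled as follows; all names (`Video`, `render_text_to_image`, `text_to_single_frame_video`) are hypothetical stand-ins, and the "image" here is placeholder bytes rather than an actual rasterization:

```python
# Illustrative sketch (hypothetical names): Grossman's text-to-video
# conversion, in which the sensitive text is rendered to an image and
# that image becomes the single frame of the protected video.

from dataclasses import dataclass, field


@dataclass
class Video:
    frames: list = field(default_factory=list)  # each entry is one "image frame"


def render_text_to_image(text: str) -> bytes:
    # Stand-in for a platform drawing API (e.g., an iOS text-drawing call);
    # here we simply encode the text as placeholder image bytes.
    return text.encode("utf-8")


def text_to_single_frame_video(sensitive_text: str) -> Video:
    image = render_text_to_image(sensitive_text)
    # The video includes a single image frame representing the sensitive
    # information -- the claimed single-frame case.
    return Video(frames=[image])


video = text_to_single_frame_video("account: 1234-5678")
print(len(video.frames))  # prints 1
```

The sketch only fixes the claimed structural point: the resulting video object carries exactly one frame, which is the rendered representation of the sensitive information.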
Regarding claim 2, Song and Grossman disclose all the claimed subject matter recited in claim 1 above. Furthermore, Song discloses the method, wherein receiving, using the application server, the formatting information that is associated with the sensitive information, from the computing device, comprises: receiving, using the application server, the formatting information that is associated with the sensitive information (when sensitive information is detected on the page to be displayed, the presentation format of the sensitive information can also be identified, and corresponding conversion strategies can be executed – see [n0063]; this solution can effectively utilize DRM rules to provide the function of prohibiting screenshots and screen recording when the page contains sensitive information, even though the iOS system itself does not provide such APIs and does not use any private APIs – see also [n0041], [n0057-58]; examiner’s note: iOS does not provide a public API for preventing screenshots and screen recording, thus the disclosure implies implementing this technical solution via public APIs, as discussed in [n0041] and [n0096]).
Song does not explicitly disclose receiving, using the application server, the formatting information via an application programming interface system.
Grossman discloses receiving, using the application server, the formatting information via an application programming interface system (the text data may be converted into an image of text, for example, using one of the APIs available to iOS, including, but not limited to: [NSString drawInRect:withAttributes:]; [UIView drawViewHierarchyInRect:afterScreenUpdates:]; and [CALayer renderInContext:] – see [0050]; the image of text may be converted, for example, using one of the APIs available to iOS, including, but not limited to, the AVAssetWriter, VideoToolbox, and AVFoundation APIs – see [0051]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method in Song to include receiving, using the application server, the formatting information via an application programming interface system, as taught by Grossman. One would have been motivated to make such a combination because Song discloses identifying the presentation format of the sensitive data and executing conversion strategies according to this format, while Grossman explicitly uses APIs to perform the conversion; thus, one skilled in the art would have recognized that the formatting information is received via the APIs in order to execute an appropriate conversion strategy, thereby allowing applications to mark certain content to be excluded from screenshots without having access to the screenshot generation process, given the failure of the operating system to provide more explicit, easier-to-use, screenshot-proof functionality, as recognized by Grossman (see [0014]).
Regarding claim 5, Song and Grossman disclose all the claimed subject matter recited in claim 1 above.
Furthermore, Song discloses the method, further comprising: receiving, using the application server, a request for the sensitive information from the computing device; and retrieving the sensitive information (when a user uses a browser, shopping app, or financial app on their mobile phone, the terminal device or server can read and identify the content of the page to be displayed in advance to determine whether there is sensitive information; sensitive information can be account information, passwords and corresponding input fields, or identity information such as identification documents, or text information such as transaction amounts and balances; alternatively, it can be image information such as QR codes and identification documents and corresponding input fields, etc. – see [n0050]; since videos protected by DRM rules cannot be screenshotted or screen-recorded, during the playback of video streams containing sensitive information, users can browse the information normally, while screen recording via development tools such as Xcode is impossible, whether inside or outside the application; this effectively achieves the goal of protecting sensitive information – see [n0084]; examiner’s note: shopping and financial applications and transaction information are inherently/effectively requested by the user when accessed).
Regarding claim 6, Song and Grossman disclose all the claimed subject matter recited in claim 1 above.
Song does not explicitly disclose the method, further comprising: receiving the sensitive information from an application programming interface system (but see [n0041] and [n0096], which imply the use of public APIs).
Grossman discloses the method, further comprising: receiving the sensitive information from an application programming interface system (the text data may be converted into an image of text, for example, using one of the APIs available to iOS, including, but not limited to: [NSString drawInRect:withAttributes:]; [UIView drawViewHierarchyInRect:afterScreenUpdates:]; and [CALayer renderInContext:] – see [0050]; the image of text may be converted, for example, using one of the APIs available to iOS, including, but not limited to, the AVAssetWriter, VideoToolbox, and AVFoundation APIs – see [0051]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method in Song to include receiving the sensitive information from an application programming interface system, as taught by Grossman. One would have been motivated to make such a combination because Song discloses identifying the presentation format of the sensitive data and executing conversion strategies according to this format, while Grossman explicitly uses APIs to perform the conversion; thus, one skilled in the art would have recognized that the sensitive information is received from the APIs after proper conversion in order to protect the sensitive information, as recognized by Grossman (see [0014]).
Regarding claim 9, Song and Grossman disclose all the claimed subject matter recited in claim 1 above. Furthermore, Song discloses the method, wherein the sensitive information represents static information (sensitive information may be text, images, or other forms of information and content on the page that need to be kept confidential by the user and not disclosed to others - see [n0050]).
Regarding claim 10, Song and Grossman disclose all the claimed subject matter recited in claim 1 above. Furthermore, Song discloses the method, wherein the video is encrypted and configured to play the image frame in a loop (the server can convert and encrypt the sensitive data in the cloud to obtain streaming media containing the corresponding sensitive information and DRM rules, i.e., video stream data – see [n0056]; the terminal device receives video stream data that has been converted and encrypted by the cloud, and plays it on the page to be displayed using the replaced streaming media player control – see [n0057]; the playback parameters of the player can be pre-configured to loop the video stream and achieve the effect of displaying sensitive information content in the control – see [n0085]; see also [n0015] and [n0106]).
Regarding claim 12, Song and Grossman disclose all the claimed subject matter recited in claim 1 above. Furthermore, Song discloses the method, further comprising: storing, using the application server, the video in a storage component (for terminal devices, the original control can be replaced with a streaming media playback control that applies DRM rules, so that the streaming media playback control can be used to play video stream data received from cloud storage – see [n0056]; examiner’s note: video is received from cloud storage, meaning video data has been stored).
Regarding claim 13, Song discloses a system comprising: at least one processor; and at least one memory having programming instructions stored thereon, which, when executed by the at least one processor, cause the system to perform operations (as shown in Figure 13, the computer system 1300 includes a central processing unit (CPU) 1301, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1302 or a program loaded from a storage section 1308 into a random access memory (RAM) 1303 – see [n0110], FIG. 13).
The remaining limitations of claim 13 are similar in scope to those of claim 1. Therefore, claim 13 is rejected for the same reasons as set forth in the rejection of claim 1 above.
Regarding claim 14, all limitations correspond to the system performing the method of claim 2. Therefore, claim 14 is being rejected on the same basis as claim 2.
Regarding claim 17, all limitations correspond to the system performing the method of claim 5. Therefore, claim 17 is being rejected on the same basis as claim 5.
Regarding claim 18, all limitations correspond to the system performing the method of claim 6. Therefore, claim 18 is being rejected on the same basis as claim 6.
Regarding claim 20, all limitations correspond to the method of claims 1 and 2. Therefore, claim 20 is being rejected on the same basis as claims 1 and 2.
Claims 3, 4, 15 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Song et al. (CN112434327A), and Grossman et al. (US 20190182549 A1), as applied to claim 1 above, and further in view of Horton (US 20150178476 A1).
Regarding claim 3, Song and Grossman disclose all the claimed subject matter recited in claim 1 above.
Furthermore, Song discloses pre-rendering the page containing the text data to obtain a corresponding intermediate image when the sensitive information is text data, and converting the intermediate image to obtain streaming media data to which DRM rules are applied (see [n0013]), using appropriate conversion strategies based on the presentation format (see [n0063]).
Song does not explicitly disclose that the presentation format represents metadata.
Grossman discloses converting text data into an image using APIs including [NSString drawInRect:withAttributes:]; [UIView drawViewHierarchyInRect:afterScreenUpdates:]; and [CALayer renderInContext:] – see [0050].
Grossman fails to explicitly disclose that the attributes represent metadata.
However, Horton discloses a method of monitoring font usage whereby fonts are monitored on a distributed computer network (see abstract), the method including wherein the formatting information represents metadata (the font analyzer identifies the format of the font file (e.g., .otf, .eot, .ttf, .woff, etc.) and at step 406 it reads or interprets the file and extracts useful metadata which can be recorded as font attributes – see [0056]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method in Song to include that wherein the formatting information represents metadata, as taught by Horton. One would have been motivated to make such a combination because formatting information includes attributes related to a plurality of fonts, website elements and others that are commonly known as metadata, as recognized by Horton (see [0013-14]).
Regarding claim 4, Song and Grossman disclose all the claimed subject matter recited in claim 1 above. Furthermore, Song discloses the method (in step S210, sensitive information in the page to be displayed is obtained – see [n0048]; in step S220, the sensitive information is converted to obtain streaming media with DRM rules applied – see [n0052]; when sensitive information is detected on the page to be displayed, the presentation format of the sensitive information can also be identified, and corresponding conversion strategies can be executed – see [n0063]; examiner’s note: sensitive information is obtained in order to convert it to video; also, presentation format information is identified in order to execute conversion to video, thus this information is received prior to video generation).
Song and Grossman fail to explicitly disclose that the formatting information represents metadata.
However, Horton discloses a method of monitoring font usage whereby fonts are monitored on a distributed computer network (see abstract), the method including wherein the formatting information represents metadata (the font analyzer identifies the format of the font file (e.g., .otf, .eot, .ttf, .woff, etc.) and at step 406 it reads or interprets the file and extracts useful metadata which can be recorded as font attributes – see [0056]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method in Song to include that wherein the formatting information represents metadata, as taught by Horton. One would have been motivated to make such a combination because formatting information includes attributes related to a plurality of fonts, website elements and others that are commonly known as metadata, as recognized by Horton (see [0013-14]).
Regarding claim 15, all limitations correspond to the system performing the method of claim 3. Therefore, claim 15 is being rejected on the same basis as claim 3.
Regarding claim 16, all limitations correspond to the system performing the method of claim 4. Therefore, claim 16 is being rejected on the same basis as claim 4.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Song et al. (CN112434327A), and Grossman et al. (US 20190182549 A1), as applied to claim 1 above, and further in view of Weldon et al. (US 20170004331 A1), hereinafter Weldon.
Regarding claim 7, Song and Grossman disclose all the claimed subject matter recited in claim 1 above.
Furthermore, Song discloses the method, wherein receiving, using the application server, the formatting information that is associated with the sensitive information, from the computing device (when a user uses a browser, shopping app, or financial app on their mobile phone, the terminal device or server can read and identify the content of the page to be displayed in advance to determine whether there is sensitive information – see [n0050]; when sensitive information is detected on the page to be displayed, the presentation format of the sensitive information can also be identified, and corresponding conversion strategies can be executed – see [n0063]; see also [n0057-58]).
Grossman discloses that the user interface may be a web browser interface that can access, retrieve, present, and/or navigate content (e.g., web pages such as Hyper Text Markup Language (HTML) pages) provided by the content server or a standalone application (e.g., a mobile “app,” etc.), that enables a user to use the user device to send/receive information to/from other user devices, the data store, and the content server – see [0026].
Song and Grossman fail to disclose the method, wherein receiving, using the application server, the formatting information comprises: receiving, using the application server, a portion of a webpage from the computing device, wherein the portion of the webpage includes a portion of a HyperText Markup Language (HTML) page, one or more Cascading Style Sheet (CSS) rules, Javascript, and the formatting information.
However, Weldon discloses a method for enabling a displayed webpage containing sensitive information to be accurately and efficiently sanitized (see abstract) including receiving, using the application server, a portion of a webpage from the computing device, wherein the portion of the webpage includes a portion of a HyperText Markup Language (HTML) page, one or more Cascading Style Sheet (CSS) rules, Javascript, and the formatting information (style sheets 120 may be implemented using a style sheet language such as Cascading Style Sheets (CSS) to contain style rules that specify the presentation, format, and visual layout of the content within webpages 118 – see [0029]; webpage 118A may contain content portion 202, sanitization module 214, data retrieval module 216, and tag update module 218; content portion 202 may be implemented using a markup language, such as HTML, to specify the semantic content (e.g., image 204 and text string 206) and the structure of the content within webpage 118A to be displayed – see [0030]; one or more of the modules 214-218 may be implemented via server-side and/or client-side scripts to enable a user to interact with content portion 202 and sanitize sensitive content; client-side scripts may be written in an interpretive language, for example, JAVASCRIPT, and are executed by client applications such as web-based application 104 on device 102A – see [0034]; in step 606, a style language parser, such as a CSS parser, within web-based application 104A may parse or translate the style rules from webpage 118A and/or associated style sheets 120 to construct a CSS Object Model (CSSOM) tree – see [0058]; upon a determination to sanitize content, a sanitization function or variable within sanitization module 214 may be invoked or set, respectively – see [0063]; see [0055-69]; examiner’s note: this information is identified in order to sanitize sensitive information).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method in Song to include receiving, using the application server, a portion of a webpage from the computing device, wherein the portion of the webpage includes a portion of a HyperText Markup Language (HTML) page, one or more Cascading Style Sheet (CSS) rules, Javascript, and the formatting information, as taught by Weldon. One would have been motivated to make such a combination in order to accurately and efficiently sanitize the displayed content, as recognized by Weldon (see [0018]).
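For illustration only (not part of the cited references or the record), the claimed webpage "portion" — a piece of an HTML page, one or more CSS rules, Javascript, and the formatting information — can be modeled as a simple payload that an application server might receive from a computing device; all field and type names here are hypothetical:

```python
# Illustrative sketch (hypothetical names): the claimed webpage portion
# carrying HTML, CSS rules, Javascript, and formatting information.

from dataclasses import dataclass, field


@dataclass
class WebpagePortion:
    html: str                                        # portion of the HTML page
    css_rules: list = field(default_factory=list)    # one or more CSS rules
    javascript: str = ""                             # client-side script
    formatting_info: dict = field(default_factory=dict)  # formatting information


portion = WebpagePortion(
    html="<span id='acct'>1234-5678</span>",
    css_rules=["#acct { font-family: monospace; }"],
    javascript="/* client-side sanitization hook */",
    formatting_info={"font": "monospace", "sensitive": True},
)
print(portion.formatting_info["sensitive"])  # prints True
```

The sketch simply fixes the structural point of claim 7: the received portion bundles all four claimed elements, with the formatting information travelling alongside the markup, style rules, and script.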
Claims 8 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Song et al. (CN112434327A), Grossman et al. (US 20190182549 A1), and Weldon et al. (US 20170004331 A1), as applied to claim 7 above, and further in view of Zhou (CN 109151520 A – citation based on English translation provided).
Regarding claim 8, Song, Grossman and Weldon disclose all the claimed subject matter recited in claim 7 above.
Song, Grossman and Weldon fail to disclose the method, wherein generating, using the application server, the video is further based on the portion of the HTML page, the one or more CSS rules, the Javascript, and a headless browser.
However, Zhou discloses a method for generating video, device, electronic apparatus, and a medium, wherein the method is applied in a headless browser (see abstract) including wherein generating, using the application server, the video is further based on the portion of the HTML page, the one or more CSS rules, the Javascript, and a headless browser (extract the display data from the generated data, and generate the video to be processed based on the display data; the display data includes visualization data and audio data; the visualization data includes HTML data, image data, CSS cascading style sheet data, and javascript data – see [0011]; the generated data uploaded by the user terminal can be used to generate and display videos in a headless browser – see [0083]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method in Song to include wherein generating, using the application server, the video is further based on the portion of the HTML page, the one or more CSS rules, the Javascript, and a headless browser, as taught by Zhou. One would have been motivated to make such a combination to solve the drawback of consuming computing resources and network bandwidth, since headless browsers are faster and use fewer resources, as recognized by Zhou (see [0083]).
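For illustration only (not part of the cited references or the record), Zhou's approach at [0011] and [0083] — rendering display data (HTML, CSS, Javascript) in a headless browser and assembling the rendered output into a video — can be sketched as follows; `HeadlessBrowser` and `make_video` are hypothetical stand-ins, and the "rendering" is placeholder bytes rather than an actual page rasterization:

```python
# Illustrative sketch (hypothetical names): server-side video generation
# from HTML/CSS/Javascript display data via a headless browser.

class HeadlessBrowser:
    """Toy stand-in for a headless browser rendering engine."""

    def render(self, html: str, css: str, js: str) -> bytes:
        # A real headless browser would lay out and rasterize the page;
        # here we combine the inputs into placeholder frame bytes.
        return (html + css + js).encode("utf-8")


def make_video(frames: list) -> dict:
    # Stand-in for assembling rendered frames into video stream data.
    return {"frames": frames, "format": "video-stream"}


browser = HeadlessBrowser()
frame = browser.render("<p>balance: $100</p>", "p { color: red; }", "")
video = make_video([frame])
print(video["format"])  # prints video-stream
```

The sketch fixes the claimed data flow: the video is generated from the HTML portion, the CSS rules, and the Javascript by way of a headless browser, with no visible display required on the server.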
Regarding claim 19, all limitations correspond to the system performing the method of claims 7 and 8. Therefore, claim 19 is being rejected on the same basis as claims 7 and 8.
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Song et al. (CN112434327A), and Grossman et al. (US 20190182549 A1), as applied to claim 1 above, and further in view of Pantazelos (US 20180083978 A1).
Regarding claim 11, Song and Grossman disclose all the claimed subject matter recited in claim 1 above.
Furthermore, Song discloses that videos protected by DRM rules cannot be screenshotted or screen-recorded (since videos protected by DRM rules cannot be screenshotted or screen-recorded, during the playback of video streams containing sensitive information, users can not only browse the information normally, but also screen recording via development tools such as Xcode is impossible, whether inside or outside the application; this effectively achieves the goal of protecting sensitive information – see [n0084]).
Song fails to disclose the method, wherein the video is configured to not play on a display screen when the display screen is being screenshared or screenshotted, based on the digital rights management technology.
Grossman discloses the method, wherein the video is configured to be absent from screen captures (screenshot captures of DRM-protected videos are absent of any video content – see [0015-17]).
Grossman fails to explicitly disclose the video is configured to not play on a display screen when the display screen is being screenshared or screenshotted.
However, Pantazelos discloses a method for conditional delivery of electronic content such as images or video stream content over a communication network (see abstract) including the method, wherein the video is configured to not play on a display screen when the display screen is being screenshared or screenshotted (the application has detected the screenshot and stopped the playing of the video – see [0066]).
Thus, Song, Grossman and Pantazelos each disclose preventing exposure of sensitive video content during screensharing or screenshotting. Song discloses that DRM-protected videos cannot be screenshared or screenshotted. Grossman discloses that if DRM-protected videos are being screenshared or screenshotted, the screenshot is captured but is absent of any of the video content. Pantazelos discloses that if screenshotting is detected, the video content stops playing. A person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that if DRM-protected videos cannot be screenshared or screenshotted, as taught by Song, the screenshot/sharing may be absent of the video content, as taught by Grossman, or video content may not be played, as taught by Pantazelos. Furthermore, a person of ordinary skill in the art would have been able to carry out the modification. Finally, the modification had a reasonable expectation of success due to the fact that both solutions prevent the sensitive information from being leaked when the screen capture or screen sharing is detected.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method in Song to include the method, wherein the video is configured to not be shown, or to be absent, when the display screen is being screenshared or screenshotted, based on the digital rights management technology, as taught by Grossman; and the method, wherein the video is configured to not play on a display screen when the display screen is being screenshared or screenshotted, as taught by Pantazelos. One would have been motivated to make such a combination, with a reasonable expectation of success, to improve prevention of sensitive information being leaked when screen capture or screen sharing is detected, as recognized by Grossman (see [0014-17]) and Pantazelos (see [0008]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DORIANNE ALVARADO DAVID whose telephone number is (571)272-4228. The examiner can normally be reached 9:00am-5:00pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Philip Chea can be reached at (571) 272-3951. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DORIANNE ALVARADO DAVID/
Examiner, Art Unit 2499

/PHILIP J CHEA/
Supervisory Patent Examiner, Art Unit 2499