Prosecution Insights
Last updated: April 19, 2026
Application No. 18/050,116

EDGE VIDEO STREAM ENCODING WITH ENCRYPTING OF CONFIDENTIAL CONTENT

Status: Non-Final OA (§103)
Filed: Oct 27, 2022
Examiner: MUNGUIA, DUILIO
Art Unit: 2497
Tech Center: 2400 — Computer Networks
Assignee: International Business Machines Corporation
OA Round: 1 (Non-Final)

Grant Probability: 100% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 3m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 100% (5 granted / 5 resolved; +42.0% vs TC avg, above average)
Interview Lift: +0.0% (minimal; based on resolved cases with interview)
Typical Timeline: 3y 3m average prosecution; 25 applications currently pending
Career History: 30 total applications across all art units

Statute-Specific Performance

§101: 6.0% (-34.0% vs TC avg)
§102: 16.7% (-23.3% vs TC avg)
§103: 69.3% (+29.3% vs TC avg)
§112: 8.0% (-32.0% vs TC avg)
Deltas are measured against a Tech Center average estimate, based on career data from 5 resolved cases.
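As a sanity check on these figures, each "vs TC avg" delta reads as the examiner's rate minus the Tech Center average, so the implied TC averages can be recovered by subtraction. A small Python sketch (the subtraction-based reading of "vs TC avg" is an assumption, not something the page states):

```python
# The page's own numbers; "allow" is the career allow rate, the rest are
# per-statute figures. Assumption: delta = examiner rate - TC average.
examiner = {"allow": 100.0, "§101": 6.0, "§102": 16.7, "§103": 69.3, "§112": 8.0}
delta    = {"allow": 42.0, "§101": -34.0, "§102": -23.3, "§103": 29.3, "§112": -32.0}

for stat, rate in examiner.items():
    tc_avg = rate - delta[stat]          # implied Tech Center average
    print(f"{stat:>5}: examiner {rate:5.1f}% vs implied TC avg {tc_avg:5.1f}%")
# allow: implied TC avg 58.0%; each statute implies a TC avg of 40.0%,
# consistent with a single Tech Center average estimate across statutes.
```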

Office Action

§103
Detailed Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The references cited in the Information Disclosure Statements (IDS) filed on 10/27/2022 have been considered.

Specification

The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 10, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Le Barz et al. (US-20110075842-A1, hereafter Le Barz) in view of Mori et al. (US-20110222687-A1, hereafter Mori).

Regarding claim 1, Le Barz teaches a computer-implemented method comprising:

obtaining, by an edge device, a video stream (see Le Barz fig. 6 and par. 0057: “The processing unit 10 comprises a tool for detecting portions of the video stream or of the image to be encrypted”);

partitioning, by the edge device, image content of the video stream into a confidential part and a non-confidential part (see Le Barz par. 0057: “11 suitable for determining various areas of interest of the image, that is to say, the areas of the image that must be encrypted. The unit 10 includes a module 13 for encrypting the identified portions, and a buffer memory 12 which receives the part of the encrypted stream together with the unencrypted other parts of the stream in a partially encrypted stream”);

encrypting, by the edge device, the confidential part of the image content to obtain encrypted image content from the confidential part and non-encrypted image content from the non-confidential part of the image content (see Le Barz par. 0057: “The unit 10 includes a module 13 for encrypting the identified portions, and a buffer memory 12 which receives the part of the encrypted stream together with the unencrypted other parts of the stream in a partially encrypted stream”);

encoding, by the edge device, the encrypted image content and the non-encrypted image content into an encoded video stream (see Le Barz par. 0057: “module 14 that makes it possible to merge the two parts of the stream, the encrypted and compressed part and the unencrypted and compressed part.”).

Le Barz appears to be silent on the final limitation; however, Mori teaches: and transmitting, by the edge device, the encoded video stream to one or more processing servers (see Mori par. 0224: “an encoder 12 (edge device) for transmitting an video image from a camera as an encoded image data, a server 14 for storing the image data such as still image data and video data and distributing the data as needed to a connected terminal, and a set top box (STB) 17 connected to the server 14 and the encoder 12 to receive the distribution of the image data are alternately connected to each other.”; par. 0275: “an image data captured by a camera is encoded by the encoder 12 and is transmitted to the server 14, where the image data is subjected to an encryption process to be stored”).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Le Barz, “a system that makes it possible to visually encrypt a video sequence, characterized in that it comprises at least one video processing unit suitable for executing the steps of the method described previously comprising at least the following elements: a means suitable for identifying the portions of the video sequence that will be encrypted, a module for encrypting the identified portions, a buffer memory which receives the other parts of the stream that are not encrypted, a module suitable for merging the part of the compressed encrypted stream and the part of the unencrypted and compressed stream” (see Le Barz par. 0027-0031), with the teaching of Mori, “The encoder 12 uses the common key for delivering a content key which the encoder 12 exchanged with the server 14 to encrypt the content key, and transmits the encrypted content key to the server 14.” (see Mori par. 0250).
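To make the claimed flow easier to follow, here is a minimal sketch of claim 1's obtain, partition, encrypt, encode, and transmit pipeline. It is illustrative only: the helper names (detect_confidential, encode_stream, transmit) and the use of AES-GCM and JSON are assumptions of this sketch, not teachings of Le Barz, Mori, or the application. Python, using the cryptography package:

```python
# Hedged sketch of the claim-1 pipeline; all names and the cipher choice are
# assumptions for illustration, not from the cited references.
import json
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

KEY = AESGCM.generate_key(bit_length=256)

def detect_confidential(frame: dict) -> set:
    """Stand-in for Le Barz's detector of portions 'to be encrypted':
    here a region counts as confidential if its name says so."""
    return {name for name in frame if name.startswith("conf_")}

def encrypt_region(data: bytes) -> bytes:
    """AES-GCM with a fresh nonce per region (assumed cipher, not from the OA)."""
    nonce = os.urandom(12)
    return nonce + AESGCM(KEY).encrypt(nonce, data, None)

def encode_stream(frame: dict) -> bytes:
    """Partition, encrypt the confidential part, and merge both parts into one
    'partially encrypted' payload (cf. Le Barz's merge module 14); JSON stands
    in for a real video codec."""
    confidential = detect_confidential(frame)
    merged = {name: (encrypt_region(data) if name in confidential else data).hex()
              for name, data in frame.items()}
    return json.dumps(merged).encode()

def transmit(payload: bytes, servers: list) -> None:
    """Claim 1's last step: ship the encoded stream to the processing servers."""
    for server in servers:
        print(f"sending {len(payload)} bytes to {server}")

frame = {"conf_face": b"\x01" * 16, "background": b"\x02" * 16}
transmit(encode_stream(frame), ["server-a", "server-b"])
```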
Regarding claim 10: claim 10 is directed to a computer system and recites limitations similar to those of claim 1; therefore, claim 10 is rejected for the same reasons as set forth in the rejection of claim 1 above. Le Barz further teaches: a memory (see Le Barz par. 0027, a buffer memory); and at least one processor in communication with the memory, wherein the computer system is configured to perform a method (see Le Barz par. 0027: “video processing unit suitable for executing the steps of the method described previously comprising at least the following elements: a means suitable for identifying the portions of the video sequence that will be encrypted, a module for encrypting the identified portions, a buffer memory which receives the other parts of the stream that are not encrypted,”), the method comprising the limitations addressed in the rejection of claim 1.

Regarding claim 17: claim 17 is a computer program product claim that recites limitations similar to those of computer-implemented method claim 1 and is rejected based on the same rationale as claim 1. Mori further teaches: one or more computer readable storage media and program instructions collectively stored on the one or more computer readable storage media to perform a method (see Mori par. 0065: “a CPU 24 for control of each component of the encoder 12 and for transmission of the encoded video data and the encoded audio data from the MPEG encoder 23 via an NIC (Network Interface Card) 26; and a RAM 25 for temporal storage of the data.”).

Claims 2-3, 11-12, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Le Barz et al. (US-20110075842-A1, hereafter Le Barz), in view of Mori et al. (US-20110222687-A1, hereafter Mori), in further view of Shanmugam et al. (US-20240089537-A1, hereafter Shanmugam).

Regarding claim 2, Le Barz in view of Mori discloses the computer-implemented method of claim 1. Le Barz in view of Mori appears to be silent on the following; however, Shanmugam teaches: wherein the edge device comprises a machine-learning-based partitioning layer to partition the image content of the video stream into the confidential part and the non-confidential part based on identifying one or more confidential objects in the image content of the video stream (see Shanmugam par. 0026: “a projector or other display system may selectively mask sensitive information in a display of a shared content stream. A projector or other device in communication with a projector can access a content stream to be shared, analyze the content using a machine-learned model to identify sensitive information within the content, and render a display of the content for projection to other viewers such that the sensitive information is not visible. For instance, the projector can mask the sensitive information by hiding, replacing, or supplementing the content such that the sensitive information is not visible in a projected display of the content.”; par. 0063-0064: “a videoconferencing system, projector system, or other content sharing system (edge device) can include a machine-learned sharing system 402 that is configured to analyze an input content stream that is shared by a first user, identify sensitive information within the content, and render a display of the content for others such that the sensitive information is not visible. For instance, the videoconferencing system can mask the sensitive information by hiding, replacing, or supplementing the content such that the sensitive information is not visible in the display of the content for other attendees of the videoconference. Machine-learned sharing system 402 can be provided for selectively masking sensitive information in content that is to be shared by a first user with other users, such as participants in a video conference or viewers of a projected display. The machine-learned sharing system 402 can receive an input content stream such as a web page, document, video, slideshow, etc. to be shared by the first user. The sharing system can input the content stream into a machine-learned masking system 404 that is configured to identify sensitive information within the content to be shared. The machine-learned masking system 404 can include one or more machine-learned models that are trained to identify sensitive information such as personal, confidential, and/or proprietary information within content. The machine-learned model(s) can be trained to identify different types of sensitive information by training the model(s) using training data that includes content labeled to identify the target sensitive information to be located. The machine-learned model can detect one or more regions in the shared content that contain sensitive information and generate one or more masks that represent the one or more region(s). The system can render or otherwise provide a modified content stream 408 that masks the detected regions so that the sensitive information is not visible. The system can mask the sensitive information by hiding, replacing, covering, obscuring, or otherwise causing the sensitive information not be visible in display of the content that is viewable by the additional users.”).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of claim 1 with the teaching of Shanmugam, “the machine-learned model 504 can partition or otherwise divide an image frame into a set of logical partitions representing content portions of the source content. The logical partitions can be generated using image segmentation or other techniques. The model 504 can identify one or more of the logical partitions corresponding to the sensitive content. The mask can be generated in accordance with the one or more logical partitions. For example, the mask can identify one or more regions of the source content that correspond to the one or more logical partitions” (see Shanmugam par. 0069).

Regarding claim 11: claim 11 is a computer system claim that recites limitations similar to those of computer-implemented method claim 2 and is rejected based on the same rationale as claim 2.

Regarding claim 18: claim 18 is a computer program product claim that recites limitations similar to those of computer-implemented method claim 2 and is rejected based on the same rationale as claim 2.

Regarding claim 3, Le Barz in view of Mori and Shanmugam discloses the computer-implemented method of claim 2. Shanmugam further discloses: further comprising masking the one or more confidential objects in the image content of the video stream to produce the non-encrypted image content from the non-confidential part of the image content (see Shanmugam par. 0064: “The machine learned model can detect one or more regions in the shared content that contain sensitive information and generate one or more masks that represent the one or more region(s). The system can render or otherwise provide a modified content stream 408 that masks the detected regions so that the sensitive information is not visible. The system can mask the sensitive information by hiding, replacing, covering, obscuring, or otherwise causing the sensitive information not be visible in display of the content that is viewable by the additional users.”; par. 0041: “The system can first provide the content to a machine-learned system that has been trained to identify such content and automatically mask it from visibility in a display of the content to other users. Further, by masking the source content, the sharing user can view the source content in its entirety while other users only view the non-sensitive portions of the content.”).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of claim 2 with the teaching of Shanmugam, “The system can identify particular regions within content that contain sensitive information and precisely mask those regions from display to others. In this manner, the system can enhance privacy and security when sharing content by offloading the identification and modification tasks from a sharing user. The system can first provide the content to a machine-learned system that has been trained to identify such content and automatically mask it from visibility in a display of the content to other users. Further, by masking the source content, the sharing user can view the source content in its entirety while other users only view the non-sensitive portions of the content.” (see Shanmugam par. 0041).

Regarding claim 12: claim 12 is a computer system claim that recites limitations similar to those of computer-implemented method claim 3 and is rejected based on the same rationale as claim 3.

Regarding claim 19: claim 19 is a computer program product claim that recites limitations similar to those of computer-implemented method claim 3 and is rejected based on the same rationale as claim 3.
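Claims 2-3 turn on a machine-learned layer that finds confidential objects and masks them. A toy sketch of that shape, under loud assumptions: the "model" below is a hard-coded stub, not Shanmugam's trained system, and the frame/box formats are invented for illustration:

```python
# Hedged sketch of ML-based partitioning (claim 2) plus masking (claim 3).
# tiny_model is a placeholder, not a real trained network.
def tiny_model(frame):
    """Pretend segmentation model: flags any 2x2 block of near-saturated
    pixels as a 'confidential object' and returns (row, col, h, w) boxes."""
    boxes = []
    for r in range(len(frame) - 1):
        for c in range(len(frame[0]) - 1):
            block = (frame[r][c], frame[r][c + 1], frame[r + 1][c], frame[r + 1][c + 1])
            if min(block) >= 250:
                boxes.append((r, c, 2, 2))
    return boxes

def mask_confidential(frame, boxes, fill=0):
    """Claim 3's masking step: hide/replace the flagged regions in the
    non-encrypted output (the originals would be encrypted separately)."""
    masked = [row[:] for row in frame]
    for r, c, h, w in boxes:
        for i in range(r, r + h):
            for j in range(c, c + w):
                masked[i][j] = fill
    return masked

frame = [[10, 10, 10, 10],
         [10, 255, 255, 10],
         [10, 255, 255, 10],
         [10, 10, 10, 10]]
boxes = tiny_model(frame)
print(boxes)                            # [(1, 1, 2, 2)]
print(mask_confidential(frame, boxes))  # the 255 block replaced by the fill value
```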
Claims 4, 13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Le Barz et al. (US-20110075842-A1, hereafter Le Barz), in view of Mori et al. (US-20110222687-A1, hereafter Mori), in further view of Corda et al. (US-9414095-B1, hereafter Corda).

Regarding claim 4, Le Barz in view of Mori discloses the computer-implemented method of claim 1. Le Barz in view of Mori appears to be silent on the following; however, Corda teaches: wherein the transmitting comprises multicasting, by the edge device, the encoded video stream to multiple processing servers with different access rights to the encrypted image content of the encoded video stream (see Corda col. 5 lines 4-59: “satellite operator NOC 108 (edge device) encrypts the IP video streams, for example for conditional access and/or digital rights management (DRM) purposes…. the encrypted video streams are then transmitted, as multicast video streams, through an uplink segment 110, a satellite 112, and a downlink segment 114, to one or more cable television headend 116 (Processing server). Only one cable television headend 116 is illustrated in FIG. 1, but there may be any number of cable television headends 116, geographically dispersed over an area covered by the satellite 112… a cable television headend 116, the video streams may optionally, in one embodiment, be decrypted and re-encrypted at the cable television headend 116, for controlling which cable subscribers may access the content.”).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of claim 1 with the teaching of Corda, “Video programs, or segment(s) thereof, received from at least one content source, are encoded and encapsulated into Internet Protocol (IP) video streams. The IP video streams are then transmitted to a satellite operator network operation center (NOC), which encrypts the streams. The streams are then transmitted, as multicast video streams, through a satellite, to one or more cable television headend(s). The cable television headend(s) in turn transmits the video streams as multicast video streams, over an IP-based network, to cable modems located within subscriber premises. At one point in time, an IP-enabled end user device then transmits, to a gateway associated with one of the cable modems, a command requesting that the end user device obtain one of the video streams” (see Corda col. 2 lines 3-16).

Regarding claim 13: claim 13 is a computer system claim that recites limitations similar to those of computer-implemented method claim 4 and is rejected based on the same rationale as claim 4.

Regarding claim 20: claim 20 is a computer program product claim that recites limitations similar to those of computer-implemented method claim 4 and is rejected based on the same rationale as claim 4.

Claims 5 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Le Barz et al. (US-20110075842-A1, hereafter Le Barz), in view of Mori et al. (US-20110222687-A1, hereafter Mori), in view of Corda et al. (US-9414095-B1, hereafter Corda), in further view of Takatsuka et al. (US-20220375197-A1, hereafter Takatsuka).

Regarding claim 5, Le Barz in view of Mori and Corda discloses the computer-implemented method of claim 4. Le Barz in view of Mori and Corda appears to be silent on the following; however, Takatsuka teaches: wherein the encrypting comprises separating the confidential part of the image content into multiple levels of confidential content, and separately encrypting the multiple levels of confidential content using different confidential level encryption keys to obtain multiple levels of confidential encrypted image content (see Takatsuka par. 0206: “In the image-capturing apparatus 1 of this example, encryption is performed on a specified target region in image data. Specifically, the entirety of an image and a target region are respectively encrypted using different encryption keys, and, from among the target region, a region of a specific portion and a region other than the region of the specific portion are respectively encrypted using different encryption keys. This results in the level of confidentiality of information being gradually changed according to a decryption key held by a recipient of the image (encryption for each level).”; par. 0226-0229: “At the level 0, an encrypted image is not decrypted by a recipient of an image, and this results in obtaining the image in which the entirety of a region is encrypted. At the level 1, the recipient of the image can decrypt a portion of a region other than the target region AT using the first encryption key, and this results in obtaining the image in which only the target region AT is encrypted. At the level 2, the recipient of the image can decrypt a portion of a region other than the specific region AS using the combining key obtained by combining the first and second encryption keys, and this results in obtaining the image in which only the specific region AS in the target is encrypted. At the level 3, the recipient of the image can decrypt the entirety of a region in the image using the combining key obtained by combining the first, second, and third encryption keys, and this results in obtaining the image in which no information is kept confidential.”).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of claim 4 with the teaching of Takatsuka, “at least three types of encryption keys are generated, the three types of encryption keys being a first encryption key that corresponds to an encryption of the entirety of a region in an image, a second encryption key that corresponds to only an encryption of the target region AT, and a third encryption key that corresponds to only an encryption of the specific region AS.” (see Takatsuka par. 0214).

Regarding claim 14: claim 14 is a computer system claim that recites limitations similar to those of computer-implemented method claim 5 and is rejected based on the same rationale as claim 5.
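Claim 5's level-keyed scheme can be pictured in a few lines. A minimal sketch assuming one AES-GCM key per confidentiality level; the level and region names are invented here and are not Takatsuka's:

```python
# Hedged sketch of "encryption for each level": each level gets its own key,
# and a recipient recovers only the levels whose keys it holds.
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

LEVEL_KEYS = {1: AESGCM.generate_key(bit_length=128),
              2: AESGCM.generate_key(bit_length=128),
              3: AESGCM.generate_key(bit_length=128)}

def encrypt_levels(regions: dict) -> dict:
    """regions maps level -> plaintext bytes; each level is sealed with its own key."""
    out = {}
    for level, data in regions.items():
        nonce = os.urandom(12)
        out[level] = nonce + AESGCM(LEVEL_KEYS[level]).encrypt(nonce, data, None)
    return out

def decrypt_with(held_keys: dict, sealed: dict) -> dict:
    """A recipient decrypts only the levels it has keys for (cf. levels 0-3)."""
    view = {}
    for level, blob in sealed.items():
        if level in held_keys:
            nonce, ct = blob[:12], blob[12:]
            view[level] = AESGCM(held_keys[level]).decrypt(nonce, ct, None)
        else:
            view[level] = b"<still encrypted>"
    return view

sealed = encrypt_levels({1: b"background", 2: b"target region", 3: b"faces"})
print(decrypt_with({1: LEVEL_KEYS[1]}, sealed))                     # level-1 viewer
print(decrypt_with({k: LEVEL_KEYS[k] for k in (1, 2, 3)}, sealed))  # full access
```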
Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Le Barz et al. (US-20110075842-A1, hereafter Le Barz), in view of Mori et al. (US-20110222687-A1, hereafter Mori), in view of Corda et al. (US-9414095-B1, hereafter Corda), in further view of Takatsuka et al. (US-20220271930-A1, hereafter Takatsuka-2).

Regarding claim 6, Le Barz in view of Mori and Corda discloses the computer-implemented method of claim 4. Le Barz in view of Mori and Corda appears to be silent on the following; however, Takatsuka-2 teaches: wherein the multiple processing servers with different access rights to the encrypted image content of the encoded video stream comprise different keys to decrypt a respective level of encrypted confidential content in the encoded video stream (see Takatsuka-2 par. 0342-0343: “The imaging apparatus 1 calculates a hash value of image data acquired by decrypting the encrypted image data Gc with the first cipher key (hereinafter denoted as a “hash value H1”), as a hash value to be transmitted to the reception apparatus 3 (processing servers) of level 1. The imaging apparatus 1 transmits, to the reception apparatus 3 of level 1, not only a value acquired by encrypting the hash value H1 calculated in such a manner with a secret key as a signature generation key (generated on the basis of a photoelectric random number, similarly to the first embodiment) but also a public key as a signature verification key generated on the basis of the secret key, together with the encrypted image data Gc… the imaging apparatus 1 calculates a hash value of image data acquired by decrypting the encrypted image data Gc with a combined key of the first and second cipher keys (hereinafter denoted as a “hash value H2”), as a hash value to be transmitted to the reception apparatus 3 of level 2. Then, the imaging apparatus 1 transmits, to the reception apparatus 3 of level 2, a value acquired by encrypting the hash value H2 calculated in such a manner with a secret key as a signature generation key and a public key as a signature verification key, together with the encrypted image data Gc.”; par. 0345-0346: “The reception apparatus 3 of level 1 decrypts the encrypted image data Gc received from the imaging apparatus 1 with the first cipher key possessed by itself and calculates a hash value of the decrypted image data (hereinafter denoted as a “hash value H1a”). Also, the reception apparatus 3 of level 1 decrypts, with the public key, the hash value H1 received from the imaging apparatus 1 and encrypted with the secret key. This value acquired by the decryption will be denoted as a hash value H1b. Then, the reception apparatus 3 of level 1 determines whether or not the hash value H1a and the hash value H1b match… The reception apparatus 3 of level 2 decrypts the encrypted image data Gc received from the imaging apparatus 1 with the combined key of the first and second cipher keys possessed by itself and calculates a hash value of the decrypted image data (hereinafter denoted as a “hash value H2a”). Further, the reception apparatus 3 of level 2 decrypts, with the public key, the hash value H2 received from the imaging apparatus 1 and encrypted with the secret key. Letting the value acquired by this decryption be denoted as a hash value H2b, the reception apparatus 3 of level 2 determines whether or not the hash value H2a and the hash value H2b match.”).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of claim 4 with the teaching of Takatsuka-2, “On the side of the reception apparatus 3 of each level, as illustrated in FIG. 28, the photoelectric random number encrypted with the public key is decrypted with the secret key of the level possessed by the reception apparatus 3 itself. Then, the reception apparatus 3 of each level calculates a hash value from the decrypted photoelectric random number and transmits the hash value to the imaging apparatus 1.” (see Takatsuka-2 par. 0366).

Regarding claim 15: claim 15 is a computer system claim that recites limitations similar to those of computer-implemented method claim 6 and is rejected based on the same rationale as claim 6.
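Takatsuka-2 pairs the per-level keys with a per-level integrity check: the sender signs a hash of what each level's recipient should see after decryption, and the recipient recomputes and verifies. A hedged sketch of that handshake, with Ed25519 standing in for the reference's signature generation/verification keys (an assumption of this sketch):

```python
# Hedged sketch of the per-level hash-and-signature check (cf. H1/H1a/H1b).
# Ed25519 is an assumed stand-in for the reference's key scheme.
import hashlib

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()   # signature generation key
verify_key = signing_key.public_key()        # signature verification key

def sender_signature(level_view: bytes) -> bytes:
    """Sender side: hash the image as it should appear when decrypted at a
    given level (cf. hash values H1, H2) and sign that hash."""
    digest = hashlib.sha256(level_view).digest()
    return signing_key.sign(digest)

def receiver_check(level_view: bytes, signature: bytes) -> bool:
    """Recipient side: recompute the hash from its own decryption (H1a/H2a)
    and verify it against the signed sender hash (H1b/H2b)."""
    digest = hashlib.sha256(level_view).digest()
    try:
        verify_key.verify(signature, digest)
        return True
    except Exception:
        return False

level1_view = b"image with target region still encrypted"
sig = sender_signature(level1_view)
print(receiver_check(level1_view, sig))        # True: the views match
print(receiver_check(b"tampered view", sig))   # False: mismatch detected
```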
Claims 7, 8, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Le Barz et al. (US-20110075842-A1, hereafter Le Barz), in view of Mori et al. (US-20110222687-A1, hereafter Mori), in view of Bloom et al. (US-20180336463-A1, hereafter Bloom).

Regarding claim 7, Le Barz in view of Mori discloses the computer-implemented method of claim 1. Le Barz in view of Mori appears to be silent on the following; however, Bloom teaches: wherein the edge device comprises one or more encode layers of an autoencoder, the encoding, by the edge device, being performed via the one or more encode layers of the autoencoder to provide a single encoded video stream from the encrypted image content and the non-encrypted image content (see Bloom par. 0041-0042: “method 300 may be performed by a hardware processor, such as a central processing unit (CPU) or graphics processing unit (GPU), of a computing device (edge device), such as a desktop, laptop, server, cluster, or the like. method 300 begins at operation 302, with a domain autoencoder machine learning model (hereafter, domain autoencoder model) being built at a model machine (e.g., a model-generation computing device). The model machine may be one that is responsible for one or more of creating, training, or managing a model. The domain autoencoder model may be built using an autoencoder on a corpus of data, which may comprise images, text, tabular, video, or time-series data.”; par. 0044: “a remote machine (e.g., an edge computing device) is provided with an encoding component of the domain autoencoder model, where the providing may comprise sending the encoding component to the remote machine, for example over a communications network. The encoding component may be provided in response to a request, from the remote machine, for the encoding component. The encoding component may be encrypted, for example by the model machine, using a machine-specific encryption key before being sent to the remote machine, and the encoding component may be sent with metadata relating to the encoding component. At operation 322, domain data at the remote machine, such as an MRI image, is encoded using the encoding component provided at operation 320, which results in encoded data. Additionally, the encoded domain data on the remote machine may comprise a compressed version of the original domain data.”; par. 0059: “the first ML model component comprises a set of initial layers of the neural network, and the second ML model component comprises a set of final layers of the neural network. In this case, where the first ML model component is received by a remote machine, the remote machine can use the first ML model component to generate, based on input data, intermediate neural network output data that the remote machine sends to another machine that is using the second ML model component.”). The examiner interprets the image or video encoded by the autoencoder as producing compressed and encoded data, which is consistent with applicant’s instant application, par. 0060: “the autoencoder is an unsupervised artificial neural network capable of learning how to efficiently compress and encode data, as well as how to reconstruct the data back from the reduced encoded representation.”

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of claim 1 with the teaching of Bloom, “the domain-specific techniques for obscuring data, domain-specific data encoding and decoding that is learned during the course of an ML training process can be used to obfuscate, and further to compress, data, which may then be transported over a communications channel (e.g., over a communications network). Through some embodiments, a computing device can send (e.g., from a remote, edge computing device to a server computing device) data encoded by a trained domain-specific encoder such that if the encoded data were intercepted, the original data could not be reproduced without a trained domain-specific decoder. In this way, the encoded data produced by the trained domain-specific encoder can be obfuscated, and remain that way until it is decoded by the trained domain-specific decoder. Additionally, the encoded data produced by the trained domain-specific encoder may be compressed in comparison to the original data, which can also be beneficial for data transport.” (see Bloom par. 0018).

Regarding claim 16: claim 16 is a computer system claim that recites limitations similar to those of computer-implemented method claim 7 and is rejected based on the same rationale as claim 7.

Regarding claim 8, Le Barz in view of Mori and Bloom discloses the computer-implemented method of claim 7. Bloom further teaches: wherein a processing server of the one or more processing servers includes one or more decode layers of the autoencoder to facilitate reconstructing at least the non-encrypted image content from the single encoded video stream (see Bloom par. 0045: “where the remote machine sends the encoded data to the inference machine (processing server) that received the decoding component at operation 310. The encoded data may be encrypted, such as by a machine-specific encryption key, before being sent to the inference machine. Additionally, the encoded data may be sent with metadata relating to the encoded data, such as information regarding the encoding component that generated the encoded data. The method 300 continues with operation 314, where the inference machine uses the decoding component, obtained at operation 310, on the encoded data to make a prediction or inference. The method 300 continues with operation 316, where the inference machine provides (e.g., sends) the prediction or inference back to the remote machine.”; par. 0059: “the first ML model component comprises a set of initial layers of the neural network, and the second ML model component comprises a set of final layers of the neural network. In this case, where the first ML model component is received by a remote machine, the remote machine can use the first ML model component to generate, based on input data, intermediate neural network output data that the remote machine sends to another machine that is using the second ML model component.”). The examiner interprets the inference machine (processing server) as being provided with a decoding component of the domain autoencoder to facilitate reconstructing the image content from the encoded image or video, which is consistent with applicant’s instant application, par. 0060: “the autoencoder is an unsupervised artificial neural network capable of learning how to efficiently compress and encode data, as well as how to reconstruct the data back from the reduced encoded representation.”

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of claim 7 with the teaching of Bloom, “the domain-specific techniques for obscuring data, domain-specific data encoding and decoding that is learned during the course of an ML training process can be used to obfuscate, and further to compress, data, which may then be transported over a communications channel (e.g., over a communications network). Through some embodiments, a computing device can send (e.g., from a remote, edge computing device to a server computing device) data encoded by a trained domain-specific encoder such that if the encoded data were intercepted, the original data could not be reproduced without a trained domain-specific decoder. In this way, the encoded data produced by the trained domain-specific encoder can be obfuscated, and remain that way until it is decoded by the trained domain-specific decoder. Additionally, the encoded data produced by the trained domain-specific encoder may be compressed in comparison to the original data, which can also be beneficial for data transport.” (see Bloom par. 0018).
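Claims 7-8 split an autoencoder across the edge device (encode layers) and the processing server (decode layers). A minimal sketch with random, untrained weights, purely to show which half runs where; the shapes and names are invented for illustration, not Bloom's architecture:

```python
# Hedged sketch of a split autoencoder: only the latent crosses the network.
import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.normal(size=(16, 4))   # encode layer: 16-dim frame patch -> 4-dim latent
W_dec = rng.normal(size=(4, 16))   # decode layer: latent -> reconstruction

def edge_encode(patch: np.ndarray) -> np.ndarray:
    """Runs on the edge device: only the encoder layers live here, so the
    transmitted latent is both compressed and obfuscated (cf. Bloom par. 0018)."""
    return np.tanh(patch @ W_enc)

def server_decode(latent: np.ndarray) -> np.ndarray:
    """Runs on the processing server, which holds the decode layers and
    reconstructs (at least) the non-encrypted content."""
    return latent @ W_dec

patch = rng.normal(size=16)
latent = edge_encode(patch)        # 4 floats cross the network instead of 16
print(latent.shape, server_decode(latent).shape)   # (4,) (16,)
```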
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Le Barz et al. (US-20110075842-A1, hereafter Le Barz), in view of Mori et al. (US-20110222687-A1, hereafter Mori), in further view of Ghafourifar et al. (US-20180189505-A1, hereafter Ghafourifar).

Regarding claim 9, Le Barz in view of Mori discloses the computer-implemented method of claim 1. Le Barz in view of Mori appears to be silent on the following; however, Ghafourifar teaches: wherein the encoding comprising using, by the edge device, lossy video compression to encode the encrypted image content and the non-encrypted image content into the encoded video stream (see Ghafourifar par. 0038: “a system (edge device) for providing Adaptive Privacy Controls (APC), file-level access permission setting may be implemented. For example, in one scenario, a user may wish to share a file of a lossy file type (e.g., a JPEG image that incorporates lossy compression) with a first colleague, but not allow that information to be visible to other colleagues who may receive the lossy file from the first colleague. The first colleague may be an ‘on-system’ recipient or an ‘off-system’ recipient. In such a scenario, User A may use the access permission setting system to send an obfuscated lossy file (e.g., by attaching the file to a MIME format email and sending using SMTP) to the first colleague, User B, while selecting the appropriate encryption attributes in the original lossy file to limit the visibility of User B (and other users who may view the container file) to only specific portions of the file's content. In one embodiment, User A may create an edited copy of the original file, referred to herein as an “obfuscated” lossy file… The client application or server (depending on the system architecture) may then “hide” (and optionally encrypt) the “true” copy of the obfuscated content within a part of the data structure of the lossy file. If encryption is desired, any compatible encryption process may be used, e.g., a public/private key process, with the specific public key being provided by the device, the recipient user, or another central authority to create an encrypted lossy file. User B can then receive a typical message with the lossy file attached, which includes the hidden (and optionally encrypted) true copy of the obfuscated portions of the file. In some embodiments, a part of the data structure of the lossy file may also include a deep-link for validating the receiving user's credentials, as well information for creating a so-called “phantom user identifier,” e.g., a temporary authorized identifier that may be used by an ‘off-system’ user to authenticate himself or herself for the purposes of viewing a particular piece(s) of protected content. The deep-link may be used to validate user credentials, as well as to view the hidden (and/or encrypted) obfuscated contents of the file in a compatible authorized viewer application.”; par. 0040: “wherein sub-document-level access permission setting may be employed in the sharing of files of lossy file types, e.g., pictures, video, or other media content compressed using lossy compression, is the situation whereby specific portions of the media content require selective censorship, redaction, or other protection for certain recipients, so as to maintain desired privacy or security levels on a per-recipient level.”).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of claim 1 with the teaching of Ghafourifar, “when viewed outside of an authorized viewing application (or by a recipient that is not authorized to decrypt any of the protection regions), the viewer will simply see the original encoded media file, which will have been compressed with the protected regions being obfuscated. This process may thus allow for the reconstruction of the original content in a secure fashion that enforces the sender's original recipient-specific privacy intentions for the various regions of the lossy file, while still allowing other unauthorized-recipients to view the redacted version of the file in standard (i.e., ‘off-system’) viewing applications for the particular lossy file type.” (see Ghafourifar par. 0026-0027).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Wang et al. (US-20210367759-A1): The image processing device includes a receiving circuit, a decoding circuit, and an output circuit. The receiving circuit is configured to receive at least one input video bitstream to obtain a first encoded bitstream and a second encoded bitstream. The decoding circuit is configured to decode the first encoded bitstream to generate a protected video frame including image data of at least one private area. The output circuit is configured to output the protected video frame to a display queue such that the at least one private area is displayed.

Ayers et al. (US-20180091856-A1): Video of the environment is processed in accordance with an entity recognition process to identify the presence of at least part of an entity in the environment. It is determined if the identified entity is to be censored based on entity information of a received VLC signal. Based on the identified entity being determined to be censored, the video recording is modified to replace at least a portion of the identified entity with a graphical element adapted to obscure the portion of the identified entity in the video stream. By modifying a video stream to obscure an entity, protected content in the environment may be prevented from being displayed to a viewer of the video recording.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DUILIO MUNGUIA, whose telephone number is (571) 270-5277. The examiner can normally be reached M-F, 9:30 AM - 5:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Eleni A. Shiferaw, can be reached at (571) 272-3867. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DUILIO MUNGUIA/
Examiner, Art Unit 2497

/ALI H. CHEEMA/
Primary Examiner, Art Unit 2497

Prosecution Timeline

Oct 27, 2022: Application Filed
Oct 19, 2023: Response after Non-Final Action
Dec 11, 2025: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12470541
IMAGE FORMING APPARATUS, DISPLAY METHOD, AND RECORDING MEDIUM FOR DISPLAYING AUTHENTICATION METHOD USING EXTERNAL SERVER OR UNIQUE TO IMAGE FORMING APPARATUS
Granted Nov 11, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on the 1 most recent grant.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 100%
With Interview: 99% (+0.0%)
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 5 resolved cases by this examiner. Grant probability derived from career allow rate.
