DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/27/2025 has been entered.
Claim Status
Claims 1-2, 4-7, 10-12, and 14-17 are pending in this Office Action.
Claims 1-2, 4, 6-7, 10-12, 14, and 16-17 are amended.
Claims 3, 8-9, 13, and 18-19 are cancelled.
Response to Arguments
Applicant’s arguments with respect to claims 1, 10, and 11 have been fully considered. The argument regarding preserving the new set of predefined pixels is moot in view of the new ground(s) of rejection. The other arguments with respect to Kundu are not persuasive.
Applicant argues Kundu does not disclose or suggest "replac[ing], from the section of interest, the one or more features of the user with a new set of predefined pixels to obtain a new section of interest, wherein the new set of predefined pixels corresponds to a set of pixels of a second class different from the first class," and "apply[ing], a false color conversion to the obtained new section of interest; blur[ring], the false color converted new section of interest to form a processed image that is added to one or more output frames,".
The examiner respectfully disagrees. Kundu teaches the processing resource 140 selectively applies one or more transforms to the region such that specific identity information associated with region of interest 220-1, such as the facial features, has been removed and replaced with other pixels. For example, pixels of the facial features of the user may be replaced using a color dilation that replaces the pixels with another type of pixel, such as a max color pixel (par. 62, 75, and 88, Fig. 5), which demonstrates replacing, from the section of interest, the one or more features of the user with a new set of predefined pixels to obtain a new section of interest, wherein the new set of predefined pixels corresponds to a set of pixels of a second class different from the first class. Kundu further teaches multiple transforms may be applied; for example, two or more transforms may be applied in series (par. 130-131). This may include an edge image transform, such as a Laplacian transform used to preserve edges of objects, which provides an effect of false-color edges (par. 91-93 and 100, Fig. 7 and 8), which demonstrates applying, a false color conversion to the obtained new section of interest. Kundu further teaches another transform that may be applied is a coarse blur "mosaic" or local blurring transform to produce the transformed image data 150, which is included in the frames of video for output (par. 57, 103, 107-109, and 131, Fig. 1, 9, and 11), which demonstrates blurring, the false color converted new section of interest to form a processed image that is added to one or more output frames.
Applicant further argues Kundu does not teach the claimed feature of "extract[ing] a set of information from the sequence of frames pertaining to sensitive information associated with a user," is performed before "replac[ing], from the section of interest, the one or more features of the user with a new set of predefined pixels to obtain a new section of interest, wherein the new set of predefined pixels corresponds to a set of pixels of a second class different from the first class."
The examiner respectfully disagrees. Kundu teaches extracting sensitive information from the video or image data (par. 26), such as by analyzing the original image data 125 to detect presence of an object of interest such as one or more faces (par. 63, Fig. 1), credit card numbers, or other personal information (par. 55, Fig. 3). This demonstrates extracting a set of information from the sequence of frames pertaining to sensitive information associated with a user. Kundu further teaches the analysis of the original image data 125 to determine the region of interest including the sensitive information, such as the person’s face, is performed before selectively applying the one or more transforms to the region of interest including the person’s face (par. 63, Fig. 2 and 3).
Applicant further argues Kundu fails to disclose "preserving, the extracted set of information [obtained before replacement of the section of interest], the section of interest, and the new set of predefined pixels," and "provid[ing], the preserved information and the one or more output frames to output layer processing,".
As mentioned above, the examiner agrees Kundu does not teach preserving the new set of predefined pixels, which is moot in view of the new grounds of rejection. However, Kundu does teach preserving the video including the section of interest and sensitive information by encrypting either the entire video, the sensitive information portion, or difference information, such that the viewer having a decryption key can view the original video (par. 26-28 and 143-148), which demonstrates preserving, the extracted set of information and the section of interest. Kundu further teaches providing the modified video data and the encrypted data to a video processor to embed the encrypted data back into the modified video (par. 27, 143-148, and 157), which demonstrates providing, the preserved information and the one or more output frames to output layer processing.
Applicant’s arguments with respect to claims 2 and 12 have been considered, but are moot in view of the new ground(s) of rejection.
Claim Objections
Claims 2 and 12 are objected to because of the following informalities:
Claims 2 and 12 recite “the plurality of pixels of the first class are obtained by calculating gradient of the plurality of pixels of the first class using prediction probabilities”. The examiner recommends replacing this with “the plurality of pixels of the first class are obtained by calculating a gradient of pixels of the section of interest using prediction probabilities” to improve clarity of the claim in accordance with par. 58 of the specification.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4, 6-7, 10-11, 14, and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Kundu et al. (US 2013/0004090) in view of Amengual Galdon et al. (US 2018/0288363).
Regarding claims 1, 10, and 11, Kundu teaches: A system, a user equipment (UE), and a method for preserving privacy for a set of data packets [the system 100 prevents access to private information contained in the image data (abstract, Fig. 1). The system could be a user equipment, such as a laptop (par. 30 and 169)], said system comprising:
a video analytics and privacy preserving module (VA-PP) module, said VA-PP module comprising one or more processors [processing resource 140 analyzes the original video image data 125 and safeguards personal or private information and includes a processor 813 (par. 50, 65, and 71, Fig. 1 and 15)],
said one or more processors operatively coupled to one or more first computing devices associated with one or more users [the processing resource 140 communicates with a surveillance system 120 and equipment at remote location 195 so that one or more users can view a rendition of the image 150 (par. 21, 54, and 164, Fig. 1 and 15)],
wherein the one or more processors executes a set of executable instructions that are stored in a memory, upon execution of which, the one or more processor causes the VA-PP module [one or more processors carry out instructions stored on computer readable storage media (par. 32 and 166, Fig. 15)] to:
receive, a first set of data from the one or more first computing device, the first set of data pertaining to a video stream [Processing resource 140 receives the original image data 125 produced by the surveillance system 120. The image data 125 being a video stream (par. 50, 52, and 71, Fig. 1)]
decode, from the first set of data, a sequence of frames [determining from the original image data, a sequence of still images (e.g., frames) (par. 57)]
extract a set of information from the sequence of frames pertaining to sensitive information associated with a user [extracting sensitive information from the video or image data, such as by analyzing the original image data 125 to detect presence of an object of interest such as one or more faces, credit card numbers, or other personal information (par. 26, 55, 63, Fig. 1 and 3)]
based on the set of information extracted, obtain, by an inference module, a section of interest from the sequence of frames, wherein the section of interest pertains to a plurality of pixels of a first class [For each frame of video including the object of interest, the processing resource 140, uses an algorithm, such as a face recognition algorithm, to identify one or more regions of interest in the frame, such as region 310-1. The region of interest pertains to pixels of a class used to render the object of interest, such as the face (par. 61 and 63, Fig. 3)]
identify, from the section of interest one or more features of the user associated with the sensitive information [identify within the region of interest, the pixels that are used to render the object of interest, such as one or more faces of the person, including facial features (par. 56 and 62-64, Fig. 2-3)]
replace, from the section of interest, the one or more features of the user with a new set of predefined pixels to obtain a new section of interest, wherein the new set of predefined pixels corresponds to a set of pixels of a second class different from the first class [the processing resource 140 selectively applies one or more transforms to the region such that specific identity information associated with region of interest 220-1, such as the facial features, has been removed and replaced with other pixels. For example, pixels of the facial features of the user may be replaced using a color dilation that replaces the pixels with another type of pixel, such as a max color pixel (par. 62, 75, and 88, Fig. 5)]
apply, a false color conversion to the obtained new section of interest [multiple transforms may be applied; for example, two or more transforms may be applied in series (par. 130-131). This may include an edge image transform, such as a Laplacian transform used to preserve edges of objects, which provides an effect of false-color edges (par. 91-93 and 100, Fig. 7 and 8)]
blur, the false color converted new section of interest to form a processed image that is added to one or more output frames [another transform that may be applied is a coarse blur "mosaic" or local blurring transform to produce the transformed image data 150 which is included in the frames of video for output (par. 57, 103, 107-109, and 131, Fig. 1, 9, and 11)]
preserve, the extracted set of information and the section of interest [preserving the video including the section of interest and sensitive information by encrypting the entire video, the sensitive information portion, or difference information, such that the viewer having a decryption key can view the original video (par. 26-28 and 143-148)]
provide, the preserved information and the one or more output frames to output layer processing [providing the modified video data and the encrypted data to a video processor to embed the encrypted data back into the modified video (par. 27, 143-148, and 157)] and
provide flexibility to modify a set of preference parameters associated with the new set of predefined pixels as per requirements at any stage [the process of transforming can include modifying settings of picture elements in the original image data 125 to produce transformed image data 150 (par. 149). Providing flexibility to modify which transforms are performed or at what strengths as necessary (par. 63, 88, 135, 139, and 155)].
Kundu does not explicitly disclose: the first set of data are packets of data; and preserve the new set of predefined pixels.
Amengual Galdon teaches: the first set of data are packets of data [packets of data (par. 2 and 118)] and
preserve the new set of predefined pixels [a transformation rule for pixel mapping is created to translate from one or more source pixels to one or more target pixels and the transformation rule is stored 216 in a cache 208 (par. 89 and 99, Fig. 2)].
It would have been obvious to one of ordinary skill in the art, having the teachings of Kundu and Amengual Galdon before the effective filing date of the claimed invention to modify the system of Kundu by incorporating the first set of data are packets of data and preserving the new set of predefined pixels as disclosed by Amengual Galdon. The motivation for doing so would have been to be able to receive the video in a stream (Amengual Galdon – par. 7) and to use the pixel mapping for use in future transformation tasks (par. 99). Therefore, it would have been obvious to combine the teachings of Kundu and Amengual Galdon to obtain the invention as specified in the instant claim.
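For illustration only, the cached pixel-mapping "transformation rule" attributed to Amengual Galdon above (translating source pixels to target pixels and storing the rule for reuse, par. 89 and 99) can be sketched as follows. The class and method names are the examiner's illustrative assumptions, not Amengual Galdon's actual implementation.

```python
class TransformationRuleCache:
    """Stores pixel-mapping rules so they can be reused on later frames."""

    def __init__(self):
        self._rules = {}  # source pixel -> target pixel

    def map_pixel(self, source, transform):
        # Create the rule on first use, then store it in the cache so that
        # future transformation tasks reuse it instead of recomputing.
        if source not in self._rules:
            self._rules[source] = transform(source)
        return self._rules[source]


cache = TransformationRuleCache()
# Example rule: map any source pixel to a fixed "max color" target pixel.
target = cache.map_pixel((12, 200, 34), lambda p: (255, 255, 255))
```

A second lookup of the same source pixel returns the stored target pixel directly, which reflects the motivation stated above of reusing the pixel mapping for future transformation tasks.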
Regarding claims 4 and 14, Kundu and Amengual Galdon teach the system of claim 1; Kundu further teaches: the system is configured to blur the false color converted new section of interest using a blur filter [multiple transforms may be applied, for example two or more transforms may be applied in series (par. 130-131). Providing an effect of false-color edges (par. 91-93 and 100, Fig. 7 and 8). Blurring the image using a coarse blur mosaic or blur transform (par. 101-103 and 107-109, Fig. 9 and 11)].
Regarding claims 6 and 16, Kundu and Amengual Galdon teach the system of claim 1; Kundu further teaches: the system is configured to post process and encode to convert one or more output frames into an output video stream [the processing resource 140 can process the transformed video by generating a first data stream and compress the first data stream using an encoding algorithm (par. 154 and 158)].
Regarding claims 7 and 17, Kundu and Amengual Galdon teach the system of claim 1; Amengual Galdon further teaches: the VA-PP module comprises one or more processed buffers configured to fine-tune the one or more output frames [buffers may be used, such as to store the segments of video in a sequential buffer (par. 129 and 131)] and
Kundu further teaches: integrate privacy preserving techniques to preserve the one or more features associated with the user pertaining to the sensitive information [processing resource 140 safeguards personal or private information by removing the personally identifiable features. The system may also preserve the features by encrypting and storing the personal information, and the encrypted information can be embedded into the video stream, so that authorized users can see the preserved original video (par. 50, 69, 71, and 143-148, Fig. 1)].
Claims 2 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Kundu et al. (US 2013/0004090) in view of Amengual Galdon et al. (US 2018/0288363) and further in view of Ma et al. (CN 104036255).
Regarding claims 2 and 12, Kundu and Amengual Galdon teach the system of claim 1; Kundu and Amengual Galdon do not explicitly disclose: the plurality of pixels of the first class are obtained by calculating gradient of the plurality of pixels of the first class using prediction probabilities.
Ma teaches: the plurality of pixels of the first class are obtained by calculating gradient of the plurality of pixels of the first class using prediction probabilities [Par. 42 appears to teach locating pixels of facial feature points by calculating a gradient of each pixel point of the face image based on probability density (par. 40-42)].
It would have been obvious to one of ordinary skill in the art, having the teachings of Kundu, Amengual Galdon, and Ma before the effective filing date of the claimed invention to modify the system of Kundu and Amengual Galdon by incorporating the plurality of pixels of the first class are obtained by calculating gradient of the plurality of pixels of the first class using prediction probabilities as disclosed by Ma. The motivation for doing so would have been to reduce the calculation complexity of locating the pixel points of the face image (Ma – par. 42). Therefore, it would have been obvious to combine the teachings of Kundu and Amengual Galdon with Ma to obtain the invention as specified in the instant claim.
Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Kundu et al. (US 2013/0004090) in view of Amengual Galdon et al. (US 2018/0288363) and further in view of Shu et al. (US 2015/0030247).
Regarding claims 5 and 15, Kundu and Amengual Galdon teach the system of claim 1; Kundu and Amengual Galdon do not explicitly disclose: the system is further configured to provide color conservation and rescaling of the sequence of frames per a sink pad configuration.
Shu teaches: the system is further configured to provide color conservation and rescaling of the sequence of frames per a sink pad configuration [a system that preserves fine color features and uses downscaling and upscaling of the image according to settings for the receiving images (par. 34). This may be done on a sequence of images, such as a video (par. 38 and 41)].
It would have been obvious to one of ordinary skill in the art, having the teachings of Kundu, Amengual Galdon, and Shu before the effective filing date of the claimed invention to modify the system of Kundu and Amengual Galdon by incorporating the system is further configured to provide color conservation and rescaling of the sequence of frames per a sink pad configuration as disclosed by Shu. The motivation for doing so would have been to correct color artifacts in the images to improve the quality of the images (Shu – par. 6). Therefore, it would have been obvious to combine the teachings of Kundu and Amengual Galdon with Shu to obtain the invention as specified in the instant claim.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Alexander Boyd whose telephone number is (571)270-0676. The examiner can normally be reached Monday - Friday 9am-5pm PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Benjamin Bruckart can be reached at 571-272-3982. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALEXANDER BOYD/Examiner, Art Unit 2424