Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments, filed 17 October 2025, with respect to the rejection of claims 1-20 under 35 U.S.C. § 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Tan et al. as modified by Chu et al.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 7-8, 14-15, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Tan et al. (“CANOPIC: Pre-digital privacy-enhancing encodings for computer vision”, hereinafter “Tan”) in view of Chu et al. (“Real-Time Privacy-Preserving Moving Object Detection in the Cloud”, hereinafter “Chu”).
Regarding claim 1, Tan discloses a method for image privacy protection, comprising:
distorting a captured analog image using a transform filter (pgs. 2-3, section 3, “CAnOPIC design, consisting of a series of computations easily implementable in either the optical or analog domains” for analog image disclosure; section 3.1, “we design our CAnOPIC to be a series of three local operations: 2D convolution, max-pooling, and quantization”; and, for the explicit distortion rationale, pg. 2 para. 3 “destroying face identities in images while preserving information necessary for face detection”);
digitizing the distorted analog image (pg. 3, section 3.1, “quantization is naturally performed by the analog-digital converter”, wherein the quantization is performed post-analog image distortion; and fig. 2 for the diagram of the CAnOPIC method, wherein the post-distortion digitization is clearly disclosed); and
analyzing the distorted, digitized image using a trained machine learning process to identify at least one of an individual or an object in the distorted, digitized image, the machine learning process having been trained to identify individuals and objects in the distorted image (pgs. 3-4, section 3.2, wherein the machine learning process is an alternating optimization process for parameter learning comprising a pair of neural networks in series with the analog distortion mechanism; explicitly in pg. 3: “Recognition NN: a neural network (NN) trained to classify face identities, (3) Detection NN: a NN trained for the binary classification of whether an image contains a face or not”). Specifically, Tan discloses CAnOPIC, a privacy-enhancing encoding for face detection and identification within a distorted image. The CAnOPIC system uses optical and analog preprocessing to distort images prior to digitization, enabling upstream destruction of identifying characteristics and increasing robustness to adversarial access attempts, while also training a series of neural networks to detect the presence and identity of a face within the distorted image.
Tan does not disclose wherein, upon identification of at least one of an individual or an object in the distorted, digitized image for which an action is to be taken, communicating an indication to at least one device to cause the at least one device to perform a predetermined action.
However, Chu discloses wherein, upon identification of at least one of an individual or an object in a distorted, digitized image for which an action is to be taken, communicating an indication to at least one device to cause the at least one device to perform a predetermined action (pg. 599, section 2.4, wherein an alarm is received by a client depending on the detection of activity by an individual or object within the encrypted video, and wherein the alarm and the prompt for potential decryption constitute the communication and predetermined action). Specifically, Chu discloses a method of privacy-preserving object motion detection performed in the cloud by recording and encrypting video from a camera, transmitting it to a server, and performing motion segmentation on the frames.
Therefore, both Tan and Chu disclose methods and systems of image distortion, digitization, and subsequent detection of objects within the frame using a learning-based model. Thus, it would have been obvious to one having ordinary skill in the art prior to the effective filing date of the claimed invention to have utilized the identification-triggered predetermined action of Chu within the overall privacy-enhancing encoding method of Tan as the application of a known technique to a known method ready for improvement, yielding the predictable improvement of a privacy-preserving, detection-based response mechanism that utilizes analog image filters and convolutions for further privacy robustness. More specifically, Tan discloses that the CAnOPIC method is directed to use within the field of “destroying face identities in images while preserving information necessary for face detection…for obtaining crowd statistics, such as foot traffic and occupancy, without compromising the identity (and thus, location) of any specific individual especially in sensitive areas such as military bases or medical settings” (Tan, pg. 2 para. 3). The disclosure of Chu may, therefore, be applied here, wherein the detection of an individual in motion may trigger a predetermined action for updating crowd statistics, as one example application.
Claims 8 and 15 are rejected, mutatis mutandis, for reasons similar to claim 1. Regarding claim 8, the ordinarily skilled artisan would understand that the method and system of Tan as modified by Chu would need to be stored in a memory accessible to a processor and executed by that processor (Abstract, “standard pipeline for many vision tasks uses a conventional camera to capture an image that is then passed to a digital processor for information extraction”).
The rationale of claim 8 directed to the processor and memory accessible to the processor is further applied to the recitation of both within the control unit of claim 15.
Additionally, Tan discloses an image capture device, comprising an imager (Abstract, “we propose an optical and analog system that preprocesses the light from the scene before it reaches the digital imager”, wherein the imager disclosed within this passage refers to a digital image processor for image processing tasks, and the captured images are analog images, necessarily captured by an imager); a transform filter (Section 3.1, “Optically, 2D convolutions can be performed via diffractive masks…Max-pooling can be performed via analog comparators”, both of which constitute transform filters); and an analog to digital converter (Section 3.1, “and quantization is naturally performed by the analog-digital converter”; and Fig. 2 element “Quantization”).
Regarding claims 7, 14, and 20, Tan in view of Chu discloses all limitations of claims 1, 8, and 15. Tan does not disclose wherein the machine learning process is trained to inverse-transform the distorted image and to identify individuals and objects in the inverse-transformed image.
However, Chu discloses wherein the machine learning process is trained to inverse-transform the distorted image and to identify individuals and objects in the inverse-transformed image (pg. 599, section 2.4, wherein inverse transform operations on the encrypted image can be performed if motion activity is detected and an alarm is received).
Thus, it would have been obvious to one having ordinary skill in the art prior to the effective filing date of the claimed invention to have integrated the inverse transform identification method of Chu within the method and system of Tan according to the rationale of claim 1.
Claims 2, 9, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Tan in view of Chu and in further view of López-Garcia et al. (“Efficient ordering of the Hadamard basis for single pixel imaging”, hereinafter “López-Garcia”).
Regarding claims 2, 9, and 16, Tan in view of Chu discloses all limitations of claims 1, 8, and 15, respectively. Tan in view of Chu does not disclose wherein the transform filter comprises at least one of a Walsh-Hadamard transform or a Fourier transform.
However, López-Garcia discloses wherein a transform filter comprises a Walsh-Hadamard transform (pgs. 3-4, section 2, detailing Hadamard single-pixel imaging and image deconstruction and reconstruction). Specifically, López-Garcia discloses Hadamard single-pixel imaging, an indirect imaging modality wherein a single-pixel detector captures the transmitted or reflected light intensity of an object under an illumination pattern to reconstruct the object image. The Hadamard imaging method of López-Garcia is able to achieve high-quality images in a noisy environment, while providing a mechanism for distortion which may serve as an alternative to the convolution kernel disclosure of Tan as modified by Chu. Thus, it would have been obvious to one having ordinary skill in the art prior to the effective filing date of the claimed invention to have utilized the Hadamard single-pixel imaging method of López-Garcia within the method of Tan as modified by Chu as a simple substitution of known elements to yield predictable results; specifically, substitution of the convolution kernel of Tan as modified by Chu with the Hadamard transform method of López-Garcia would still predictably result in a distortion prior to digitization, enabling enhanced privacy and image inscrutability prior to conversion.
Claims 3-6, 10-13, and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Tan in view of Chu and in further view of Shen et al. ("Smart lighting control system based on fusion of monocular depth estimation and multi-object detection", hereinafter “Shen”).
Regarding claims 3, 10, and 17, Tan in view of Chu discloses all limitations of claims 1, 8, and 15, respectively.
Despite indications within Tan (pg. 2, section 1, “military bases or medical settings”) and Chu (Fig. 2, depicting a residential location) of the environment in which the method of Tan as modified by Chu is to be incorporated, Tan and Chu do not explicitly disclose wherein the analog image is captured using an image capture device located in at least one of a residential, a commercial, or an industrial environment.
However, Shen discloses wherein the analog image is captured using an image capture device located in a commercial area (pg. 4, wherein images were taken within offices, classrooms, and general locations within Shandong Normal University). Specifically, Shen discloses a smart lighting system based on object detection and monocular depth estimation. Therefore, Shen discloses a potential application environment for the method of Tan and Chu, wherein a camera with privacy preservation and encryption detects objects in an environment and applies changes to the environment based on the presence or absence of a particular individual or object. Thus, it would have been obvious to one having ordinary skill in the art prior to the effective filing date of the claimed invention to view the image privacy protection and actionable response method of Tan and Chu as a known method which could be utilized within the method of Shen as a variation in a different field, based on the design incentive of privacy for individuals present within the image of Shen.
Regarding claims 4, 11, and 18, Tan in view of Chu and in further view of Shen discloses all limitations of claims 3, 10, and 17, respectively. Shen further discloses wherein the at least one device is located in the commercial environment and the predetermined action performed by the at least one device causes a change to the commercial environment (pg. 5 para. 1 to pg. 8 para. 4, wherein the device is located within different rooms within the university after training, the predetermined action performed is the light turning on, and the change to the commercial environment is the light turning on or off depending on the detection of the individual). Thus, it would have been obvious to one having ordinary skill in the art prior to the effective filing date of the claimed invention to combine the disclosures of Tan as modified by Chu and Shen according to the rationale of claim 3.
Regarding claims 5, 12, and 19, Tan in view of Chu and in further view of Shen discloses all limitations of claims 1, 8, and 15, respectively. Chu further discloses identifying at least one individual or object within the distorted, digitized image for which action is to be taken (pg. 599, section 2.3, wherein the image identification of the object or individual is accomplished as a result of Gaussian operations). Thus, it would have been obvious to one having ordinary skill in the art prior to the effective filing date of the claimed invention to have integrated the object detection and action determination step of Chu within the method of Tan as modified by Chu according to the rationale of claim 1.
Tan in view of Chu does not disclose wherein the status of an object is to be determined.
However, Shen discloses determining a status of the at least one individual or object identified in an image (page 12, wherein different motions, states, and positions were detected to determine the presence or absence of the individual for the light state). Thus, it would have been obvious to one having ordinary skill in the art prior to the effective filing date of the claimed invention to combine the disclosures of Tan, Chu, and Shen according to the rationale of claim 3.
Regarding claims 6 and 13, Tan in view of Chu and in further view of Shen discloses all limitations of claims 5 and 12, respectively. Chu further discloses identifying at least one individual or object within the distorted, digitized image for which action is to be taken (pg. 599, section 2.3, wherein the image identification of the object or individual is accomplished as a result of Gaussian operations). Thus, it would have been obvious to one having ordinary skill in the art prior to the effective filing date of the claimed invention to have integrated the object detection and action determination step of Chu within the method of Tan as modified by Chu according to the rationale of claim 1.
Tan in view of Chu does not disclose wherein a predetermined action to be taken by the device is dependent on the determined status of the at least one individual or object identified in the image for which action is to be taken.
However, Shen discloses wherein a predetermined action to be taken by the device is dependent on the determined status of the at least one individual or object identified in the image for which action is to be taken (Abstract, wherein a smart lighting system is disclosed which operates on detected positions of individuals in frame, and wherein the lights automatically switch on and off depending on distance and depth in the camera frame). Thus, it would have been obvious to one having ordinary skill in the art prior to the effective filing date of the claimed invention to combine the disclosures of Tan as modified by Chu and Shen according to the rationale of claim 3.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROHAN TEJAS MUKUNDHAN whose telephone number is (571)272-2368. The examiner can normally be reached Monday - Friday 9AM - 6PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse, can be reached at 571-272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ROHAN TEJAS MUKUNDHAN/Examiner, Art Unit 2663
/GREGORY A MORSE/Supervisory Patent Examiner, Art Unit 2698