Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim status: claims 1-11 are pending in this Office Action.
DETAILED ACTION
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 11 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
The claim is drawn to a “computer readable storage medium”. The specification is silent regarding the meaning of a “computer readable storage medium”, which can therefore include a signal per se. Thus, applying the broadest reasonable interpretation in light of the specification, and taking into account the meaning of the words in their ordinary usage as they would be understood by one of ordinary skill in the art (MPEP § 2111), the claim as a whole covers both transitory and non-transitory media. A transitory medium does not fall into any of the four statutory categories of invention (process, machine, manufacture, or composition of matter). Therefore the claim is rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter. It is respectfully suggested that Applicant amend the claim to overcome the 35 U.S.C. § 101 rejection.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-2 and 4-11 are rejected under 35 U.S.C. 103 as being unpatentable over Morzhakov (US20200349347A1), hereafter referred to as Morzhakov, in view of Ren, “Learning to Anonymize Faces for Privacy Preserving Action Detection”, hereafter referred to as Ren.
Regarding claim 1:
Morzhakov teaches A method for providing training data for a machine learning model for monitoring a person based on video data, the method being performed by a training data provider ([0096-0097] obtaining synthetic data for training a neural network … a selected human skeleton. The human skeleton may be extracted from a video stream using an available pose estimation software), the method comprising:
obtaining a data feed of the person, wherein the data feed comprises a series of images that preserves a privacy of the person ([0002] capturing video images and analyzing them in real time … in the recoded motion. [0006] receiving from a sensor an image of the person to be monitored [0005] remotely monitoring a person without displaying actual images of the person, thereby protecting the person's privacy. This is achieved, in part, by displaying stick figures that are associated with the person and that are derived from images of the person. [0045] There are a lot of privacy concerns and discomfort for people when they know that they are under video surveillance. To reduce this discomfort we created a way to hide sensitive data about peoples' identities. This techniques helps us show important information about the people who are monitored, e.g., to check where somebody is or to check what that person is doing, but not to too much to become intrusive to the privacy of that person. [0046] With reference to FIG. 1, we use “skeleton view” or “stick-figures … A person indicated by the stick FIG. 116 is present);
generating fake video data of a fictive person ([0097] a selected human skeleton. The human skeleton may be extracted from a video stream using an available pose estimation software. The result of the selection of the skeleton is a set of “joints”—limbs that correspond to the human body. Second, we synthesize falls in the form of skeletons using known MotionCapture techniques. We collected 150 falls examples using MotionCapture, and also captured a set of situations/motions that are not falls. These 150+ examples were applied to the skeletons to simulate or synthesize several different falls, e.g., in different directions, of the person from the video images of whom the skeleton was obtained)
combining the data feed with the fake video data, resulting in training data ([0097] a selected human skeleton. The human skeleton may be extracted from a video stream … we synthesize falls in the form of skeletons using known MotionCapture techniques … these 150+ examples were applied to the skeletons to simulate or synthesize several different falls, e.g., in different directions, of the person from the video images of whom the skeleton was obtained. Note: “applied” corresponds to combining; “the human skeleton may be extracted from a video stream” corresponds to fake video data); and
providing the training data for training the machine learning model ([0098] For each such camera, animations of the person to be monitored falling were generated for different heights and widths of the skeleton. Simulated noise, e.g., in the form of missing/hidden joints, wrong position of joints, frame drops in the sequence of the video stream, etc., were added to the animations. A recurrent neural network was trained on the resulting skeleton base. [0010] provided for training sets of autoencoders. The method includes the step of: providing a number of stick figures corresponding to an image of a person [0024] Pose detection based on a deep-learning trained convolutional neural network, that takes into consideration consistency of input frames and possible changes in a monitored person pose due to the person's movements).
Morzhakov does not teach that a face of the fake video data is a computer-generated fictive face.
Ren teaches generating fake video data of a fictive person, such that a face of the fake video data is a computer-generated fictive face (See fig. 1 where the fake face video of Alex is generated from Alex’s face video. The first half of page 1: “you do not want your personal assistant to record Alex's face, because you are concerned about his privacy information since the camera could potentially be hacked. Ideally, we would like a face anonymizer that can preserve Alex's privacy (i.e., make his face no longer recognizable as Alex) while at the same time unaltering his actions. In this paper, our goal is to create such a system.” … sending potentially privacy-sensitive images/videos … techniques remove scene details in the images/videos in an attempt to protect privacy … modifies the face pixels in video frames to minimize face identification accuracy);
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Morzhakov with the teachings of Ren to implement generating fake video data of a fictive person, such that a face of the fake video data is a computer-generated fictive face. One would be motivated to do so in order to provide a face anonymizer that can preserve face privacy (Ren, Fig. 1, page 1).
Regarding claim 2:
Morzhakov teaches The method according to claim 1, wherein the data feed has been captured using a privacy preserving capturing device (See fig. 1. [0046] the room 102 has installed therein a sensor 104. The sensor 104 may include a camera, a stereo camera, a 3D camera, an infra-red camera. [0098] the allowable position of the camera (i.e., the relative position of a person and a camera)).
Regarding claim 4:
Morzhakov teaches The method according to claim 2, wherein the privacy preserving capturing device is an infrared, IR, camera with a resolution that is low enough to not reveal a face of the person (see fig. 1. [0046] “skeleton view” or “stick-figures” … the room 102 has installed therein a sensor 104. The sensor 104 may include a camera, a stereo camera, a 3D camera, an infra-red camera … A person indicated by the stick FIG. 116 is present in a corner of the room 102. [0005] displaying stick figures that are associated with the person and that are derived from images of the person).
Regarding claim 5:
Morzhakov teaches The method according to claim 1,
Morzhakov does not teach wherein the step of generating fake video data comprises generating fake video data based on a generative adversarial network, GAN.
Ren teaches wherein the step of generating fake video data comprises generating fake video data based on a generative adversarial network, GAN (See fig. 1 where the fake face video of Alex is generated from Alex’s face video. The first half of page 1: “you do not want your personal assistant to record Alex's face, because you are concerned about his privacy information since the camera could potentially be hacked. Ideally, we would like a face anonymizer that can preserve Alex's privacy (i.e., make his face no longer recognizable as Alex) while at the same time unaltering his actions. In this paper, our goal is to create such a system.” … sending potentially privacy-sensitive images/videos … techniques remove scene details in the images/videos in an attempt to protect privacy … modifies the face pixels in video frames to minimize face identification accuracy. The second half of page 2: “learning the video anonymizer. We use an adversarial training strategy … use a multi-task extension of the generative adversarial network (GAN) [11] formulation”);
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Morzhakov with the teachings of Ren to implement wherein the step of generating fake video data comprises generating fake video data based on a generative adversarial network, GAN. One would be motivated to do so in order to use a multi-task extension of the generative adversarial network (GAN) formulation to anonymize faces (Ren, page 2).
Regarding claim 6:
Morzhakov teaches A training data provider for providing training data for a machine learning model for monitoring a person based on video data, the training data provider comprising:
a processor ([0101] one or more processors, one or more memory); and
a memory storing instructions ([0101] one or more memory … instructions stored in a memory).
[Rejection rationale for claim 1 is applicable].
Regarding claim 7:
[Rejection rationale for claim 2 is applicable].
Regarding claim 9:
[Rejection rationale for claim 4 is applicable].
Regarding claim 10:
[Rejection rationale for claim 5 is applicable].
Regarding claim 11:
Morzhakov teaches A computer readable storage medium storing a computer program for providing training data for a machine learning model for monitoring a person based on video data, the computer program comprising computer program code which, when executed on a training data provider causes the training data provider to ([0101] instructions stored in a memory module and/or a storage device for execution thereof [0096-0097] obtaining synthetic data for training a neural network … a selected human skeleton. The human skeleton may be extracted from a video stream using an available pose estimation software).
[Rejection rationale for claim 1 is applicable].
Claims 3 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Morzhakov (US20200349347A1), hereafter referred to as Morzhakov, in view of Ren, “Learning to Anonymize Faces for Privacy Preserving Action Detection”, hereafter referred to as Ren, further in view of Sprenger (US20150302207), hereafter referred to as Sprenger.
Regarding claim 3:
Morzhakov-Ren teaches The method according to claim 2,
Morzhakov-Ren does not teach wherein the privacy preserving capturing device is a radar.
Sprenger teaches wherein the privacy preserving capturing device is a radar ([0071] a radar sensor to detect presence of at least one person within a detection zone about the system and to output a detection notification responsive to the presence detection. [0050] the capture device is a video camera that is used to provide a video image … obtained from the sensor).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Morzhakov-Ren with the teachings of Sprenger to implement wherein the privacy preserving capturing device is a radar. One would be motivated to do so in order to use a radar sensor to detect the presence of at least one person within a detection zone about the system and to output a detection notification responsive to the presence detection (Sprenger, [0071]).
Regarding claim 8:
[Rejection rational for claim 3 is applicable].
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HIEN DOAN whose telephone number is 571-272-4317. The examiner can normally be reached on Monday-Thursday and biweekly Friday, 9am-6pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, VIVEK SRIVASTAVA, can be reached on 571-272-7304. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HIEN V DOAN/Examiner, Art Unit 2449
/VIVEK SRIVASTAVA/Supervisory Patent Examiner, Art Unit 2449