DETAILED ACTION
This Action is in response to Applicant’s response filed on 10/02/2025. Claims 1-20 are still pending in the present application. This Action is made FINAL.
Response to Arguments
With respect to the 35 U.S.C. 101 Rejection: Applicant argues that claims 1 and 16 have been amended to include patent-eligible subject matter. After reviewing the amendments and arguments filed on 10/02/2025, the Examiner has withdrawn the previous 101 rejection for the following reason: the claims recite steps and features that are “significantly more” than any alleged judicial exception and/or provide an improvement to the technical field.
With respect to the 35 U.S.C. 102(a)(1) Rejection: Applicant's arguments filed on 10/02/2025 have been fully considered but are moot in view of the new ground(s) of rejection based on Harichandana et al. (“PrivPAS: A real time Privacy-Preserving AI System and applied ethics”; hereinafter Harichandana).
Claim Status
Claim(s) 1-5, 7-15 and 17-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Carey (U.S. 20200045267 A1), in view of Harichandana et al. (“PrivPAS: A real time Privacy-Preserving AI System and applied ethics”; Harichandana).
Claim(s) 6 and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Carey (U.S. 20200045267 A1), in view of Harichandana et al. (“PrivPAS: A real time Privacy-Preserving AI System and applied ethics”; Harichandana), and further in view of Kogoshi (U.S. 20160203454 A1).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1-5, 7-15 and 17-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Carey (U.S. 20200045267 A1), in view of Harichandana et al. (“PrivPAS: A real time Privacy-Preserving AI System and applied ethics”; Harichandana).
Regarding claim 1, Carey discloses an apparatus for person detection in a security system (Paragraph 69: “With reference to FIG. 1, an analytical recognition system including video observation, surveillance and verification.”), comprising: a memory; and a processor coupled with the memory (Paragraph 59: “the present disclosure may employ various integrated circuit components, e.g., memory elements, processing elements, logic elements, look-up tables, and the like, which may carry out a variety of functions under the control of one or more microprocessors or other control devices”) and configured to:
receive a video stream captured by a camera installed in an environment; (Paragraph 69: “System 100 is a network video and data recorder that includes the ability to record video from one or more cameras 110 (e.g., analog and/or IP camera) … Video cameras 110 connect to a computer 120 across a connection 130.”)
identify a first person in one or more images of the video stream (Paragraph 74: “Non-video frame data may include a count of objects identified (e.g., objects may include people and/or any portion thereof, inanimate objects, animals, vehicles or a user defined and/or developed object) and one or more object properties (e.g., position of an object, position of any portion of an object, dimensional properties of an object, dimensional properties of portions and/or identified features of an object) and relationship properties (e.g., a first object position with respect to a second object),”; Paragraph 78)
extract a plurality of visual attributes of the first person that do not include personal identifiable information from the one or more images; wherein the personal identifiable information (Facial recognition) is a data point unique to the first person that can identify the first person without any other data points; (Paragraph 81: “the data analytics module 140 may, for instance, be configured to detect the behavior of the person by extracting behavioral information from the video data and/or the mobile communication device data. The behavior may include the person looking in a particular direction, reaching for an item of merchandise, purchasing the item of merchandise, traveling along a path at the premises, visiting an aisle or a location at the premises, spending an amount of time at the premises, spending an amount of time at the location at the premises, and/or visiting the premises on a number of separate instances.”); Paragraph 101; Paragraph 121: “the system 100, 200, 300, 400, 500 and/or 600 may be manually programmed to recognize an individual or suspect 605a in an investigation (or prior felon) based on clothing type, piercings, tattoos, hair style, etc. (other than facial recognition which may also be utilized depending on authority of the organization (FBI versus local mall security)). … An image of a suspect 705a may be scanned into the data analytics module 140 and items such as piercings, tattoos, hairstyle, logos, and headgear may be flagged and uploaded into the image database for analyzing later in real time or post time analysis.”)
encode the plurality of visual attributes into a first signature representing the first person; compare the first signature with a plurality of signatures of persons tagged as security risks; (Paragraph 89: “the particular user behavior may be defined by a model 143 of the behavior where the model 143 includes one or more attribute such a size, shape, length, width, aspect ratio or any other suitable identifying or identifiable attribute (e.g., tattoo or other various examples discussed herein). The computer 120 includes a matching algorithm or matching module 141, such as a comparator, that compares the defined characteristics and/or the model 143 of the particular user behavior with user behavior in the defined non-video data. Indication of a match by the matching algorithm or module 141 generates an investigation wherein the investigation includes the video data and/or non-video data identified by the matching module 141”) and
generate a security alert in response to the first signature corresponding to a second signature of the plurality of signatures based on comparing the first signature with the plurality of signatures of persons tagged as security risks. (Paragraph 90: “The investigation may be sent to other cameras or systems on a given network or provided over a community of networks to scan for a match or identify and alert. Matching module 141 may be configured as an independent module or incorporated into the data analytics module 140 in the computer 120 or in any cameras 110. The data analytics module 140 may also include a comparator module 142 configured to compare the model 143 of the particular user behavior and the non-video data.”; Paragraph 117; Paragraph 121: “An image of a suspect 705a may be scanned into the data analytics module 140 and items such as piercings, tattoos, hairstyle, logos, and headgear may be flagged and uploaded into the image database for analyzing later in real time or post time analysis. For example, if the individual 605a robs a convenient store and his/her facial image is captured onto one or more cameras 610, not only may his/her image be uploaded to all the cameras 610, but other identifying information or characteristics or traits 615 as well, e.g., hair style, tattoos, piercings, jewelry, clothing logos, etc. If the thief 605a enters the store again, an alert will automatically be sent to security. Even if the system recognizes a similar trait 615 on a different person that person may be deemed a suspect for questioning by authorities.”)
However, Carey does not disclose filtering one or more images of the video stream by executing a machine learning model trained to detect and remove visuals related to various types of personal identifiable information from the one or more images, or identifying a first person in the filtered one or more images of the video stream.
Harichandana discloses receive a video stream captured by a camera installed in an environment; (Fig.1 and III. Dataset: Curation and Preparation: A. Curation: “Datasets targeting mobility aids like the Mobility Aids dataset [8] have a large number of images for the object detection task. But we observe that since these images are curated using video feeds, there is a high image similarity between images captured in consecutive frames and thus, the uniqueness of information within the dataset is very limited.”)
filter one or more images of the video stream by executing a machine learning model trained to detect and remove visuals related to various types of personal identifiable information from the one or more images; (Fig.4 shows the output of different augmentations applied to a sample image. ; C. Data Anonymization: “Faces represent a general and ubiquitous type of private information, we aim to determine the capability of our model to train only on face anonymized dataset and still perform with significant accuracy… We use Gaussian blurring for face de-identification. We use the state-of-the-art ML Kit Face Detection model to detect face contour and generate a bounding box for each face … A crop of the bounding box is then extracted and we apply Gaussian blurring using the OpenCV library. The Gaussian blurring algorithm scans over each pixel of the cropped image. … The values are verified experimentally to effectively remove all identifying facial features. Fig. 5 illustrates the sample results of the entire procedure.”)
identify a first person in the filtered one or more images of the video stream; (Figs. 7-8; IV. MODEL: “The pipeline we propose consists of object detection followed by eye-gaze detection as shown in Fig. 1.”; V: Result and Discussion: “ As shown in Fig. 7, considering model size and performance, M_Final is the optimal model which has a minimal memory footprint of 8.49MB after quantization and performs with an mAP of 89.52% on the validation set … We observe that our model achieves an mAP of 74.51% as illustrated in Fig. 8. The figure also shows the precision-recall curves for both classes and sample model outputs. This shows that our object detection module performs well with a minimal memory footprint.”)
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Carey to include the on-device real-time Privacy-Preserving AI System (PrivPAS), trained on a custom dataset of annotated images, as taught by Harichandana. One of ordinary skill in the art would have been motivated to combine the references because doing so would improve the accuracy of object detection as well as enhance the privacy of individuals with disabilities. (Harichandana: Conclusion)
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Regarding claim 2, Carey, as modified by Harichandana, discloses the claimed invention as discussed above. Carey further discloses that the processor is further configured to store the first signature in the plurality of signatures. (Paragraph 81: “the data analytics module 140 may be configured to detect a behavior of the person and store in the profile behavioral data corresponding to the behavior. … The behavior may include the person looking in a particular direction, reaching for an item of merchandise, purchasing the item of merchandise, traveling along a path at the premises, visiting an aisle or a location at the premises, spending an amount of time at the premises, spending an amount of time at the location at the premises, and/or visiting the premises on a number of separate instances.”)
Regarding claim 3, Carey, as modified by Harichandana, discloses the claimed invention as discussed above. Carey further discloses that the video stream is received at a first time, wherein the processor is further configured to: receive, prior to the first time, a prior video stream including images of the first person and a tag indicating that the first person is a security risk; generate the second signature of the first person; and store the second signature in the plurality of signatures. (Paragraph 121: “An image of a suspect 705a may be scanned into the data analytics module 140 and items such as piercings, tattoos, hairstyle, logos, and headgear may be flagged and uploaded into the image database for analyzing later in real time or post time analysis. For example, if the individual 605a robs a convenient store and his/her facial image is captured onto one or more cameras 610, not only may his/her image be uploaded to all the cameras 610, but other identifying information or characteristics or traits 615 as well, e.g., hair style, tattoos, piercings, jewelry, clothing logos, etc. If the thief 605a enters the store again, an alert will automatically be sent to security. Even if the system recognizes a similar trait 615 on a different person that person may be deemed a suspect for questioning by authorities.”)
Regarding claim 4, Carey, as modified by Harichandana, discloses the claimed invention as discussed above. Carey further discloses that the processor is further configured to receive a user input including the tag. (Paragraph 21; Paragraph 121: “For example, if the individual 605a robs a convenient store and his/her facial image is captured onto one or more cameras 610, not only may his/her image be uploaded to all the cameras 610, but other identifying information or characteristics or traits 615 as well, e.g., hair style, tattoos, piercings, jewelry, clothing logos, etc. If the thief 605a enters the store again, an alert will automatically be sent to security.”)
Regarding claim 5, Carey, as modified by Harichandana, discloses the claimed invention as discussed above. Carey further discloses that the processor is further configured to: detect a security event caused by the first person; and generate the tag indicating that the first person is the security risk. (Paragraph 121: “An image of a suspect 705a may be scanned into the data analytics module 140 and items such as piercings, tattoos, hairstyle, logos, and headgear may be flagged and uploaded into the image database for analyzing later in real time or post time analysis. … For example, if the individual 605a robs a convenient store and his/her facial image is captured onto one or more cameras 610, not only may his/her image be uploaded to all the cameras 610, but other identifying information or characteristics or traits 615 as well, e.g., hair style, tattoos, piercings, jewelry, clothing logos, etc. If the thief 605a enters the store again, an alert will automatically be sent to security.”)
Regarding claim 7, Carey, as modified by Harichandana, discloses the claimed invention as discussed above. Carey further discloses that the plurality of visual attributes comprises one or more of: attire, gender, ethnicity, age group, hair color, or gait. (Paragraph 121: “The system 100, 200, 300, 400, 500 and/or 600 may be manually programmed to recognize an individual or suspect 605a in an investigation (or prior felon) based on clothing type, piercings, tattoos, hair style, etc. (other than facial recognition which may also be utilized depending on authority of the organization (FBI versus local mall security)). An image of a suspect 705a may be scanned into the data analytics module 140 and items such as piercings, tattoos, hairstyle, logos, and headgear may be flagged and uploaded into the image database for analyzing later in real time or post time analysis”)
Regarding claim 8, Carey, as modified by Harichandana, discloses the claimed invention as discussed above. Carey further discloses that the environment is a retail environment and the first person is tagged for theft, wherein the second signature is associated with an identifier of a product that was stolen and the first signature is associated with the identifier of the product. (Paragraph 121: “An image of a suspect 705a may be scanned into the data analytics module 140 and items such as piercings, tattoos, hairstyle, logos, and headgear may be flagged and uploaded into the image database for analyzing later in real time or post time analysis. … For example, if the individual 605a robs a convenient store and his/her facial image is captured onto one or more cameras 610, not only may his/her image be uploaded to all the cameras 610, but other identifying information or characteristics or traits 615 as well, e.g., hair style, tattoos, piercings, jewelry, clothing logos, etc. If the thief 605a enters the store again, an alert will automatically be sent to security.”)
Regarding claim 9, Carey, as modified by Harichandana, discloses the claimed invention as discussed above. Carey further discloses that the environment is a retail environment and the first person is tagged for theft, wherein the second signature is associated with a location, in the retail environment, where a product was stolen and the first signature is associated with the location where the first person is standing. (Paragraphs 121-122: “An image of a suspect 705a may be scanned into the data analytics module 140 and items such as piercings, tattoos, hairstyle, logos, and headgear may be flagged and uploaded into the image database for analyzing later in real time or post time analysis. … For example, if the individual 605a robs a convenient store and his/her facial image is captured onto one or more cameras 610, not only may his/her image be uploaded to all the cameras 610, but other identifying information or characteristics or traits 615 as well, e.g., hair style, tattoos, piercings, jewelry, clothing logos, etc. If the thief 605a enters the store again, an alert will automatically be sent to security … may also generate a library of individuals and/or patrons that regularly frequent or visit a particular location thereby eliminating the need to track these particular individuals and allowing the system 100, 200, 300, 400, 500 and/or 600 to focus on identification and tracking of individuals not previously identified and saved in the library.”)
Regarding claim 10, Carey, as modified by Harichandana, discloses the claimed invention as discussed above. Carey further discloses that the processor is further configured to: compare the first signature with a second plurality of signatures of persons tagged as non-security risks; and store movement information of the first person in response to the first signature corresponding to a third signature of the second plurality of signatures based on comparing the first signature with the second plurality of signatures of persons tagged as non-security risks. (Paragraph 109: “The data analytics module 140 may also be configured to track abnormal velocity of patrons 504a-504l and/or individuals arriving or departing from a particular location 520. A typical arrival and/or departure velocity may be preset or obtained from an algorithm of previous individuals that may have arrived or departed from a particular location over a preset or variable amount of time.”; Paragraph 153: “At block 906, the profile data generated at block 904 is normalized based on one or more normalization criteria. For example, the profile data can be normalized based on (1) the number of visits that people have made to a particular location (e.g., a store location having one or more cameras 110 and antennae 150 by which video data and/or mobile communication device data was captured at block 902), (2) durations of time for which people have remained at a particular location, and/or (3) a frequency or repetition rate of visits that people have made to a particular location. This may be useful to identify repeat customers, a criminal casing a store before committing a robbery, and/or the like”)
Regarding claim 11, Carey discloses a method for person detection in a security system (Paragraph 69: “With reference to FIG. 1, an analytical recognition system including video observation, surveillance and verification.”), comprising:
receiving a video stream captured by a camera installed in an environment; (Paragraph 69: “System 100 is a network video and data recorder that includes the ability to record video from one or more cameras 110 (e.g., analog and/or IP camera) … Video cameras 110 connect to a computer 120 across a connection 130.”)
identifying a first person in one or more images of the video stream (Paragraph 74: “Non-video frame data may include a count of objects identified (e.g., objects may include people and/or any portion thereof, inanimate objects, animals, vehicles or a user defined and/or developed object) and one or more object properties (e.g., position of an object, position of any portion of an object, dimensional properties of an object, dimensional properties of portions and/or identified features of an object) and relationship properties (e.g., a first object position with respect to a second object),”; Paragraph 78)
extracting a plurality of visual attributes of the first person that do not include personal identifiable information from the one or more images; wherein the personal identifiable information (Facial recognition) is a data point unique to the first person that can identify the first person without any other data points; (Paragraph 81: “the data analytics module 140 may, for instance, be configured to detect the behavior of the person by extracting behavioral information from the video data and/or the mobile communication device data. The behavior may include the person looking in a particular direction, reaching for an item of merchandise, purchasing the item of merchandise, traveling along a path at the premises, visiting an aisle or a location at the premises, spending an amount of time at the premises, spending an amount of time at the location at the premises, and/or visiting the premises on a number of separate instances.”); Paragraph 101; Paragraph 121: “the system 100, 200, 300, 400, 500 and/or 600 may be manually programmed to recognize an individual or suspect 605a in an investigation (or prior felon) based on clothing type, piercings, tattoos, hair style, etc. (other than facial recognition which may also be utilized depending on authority of the organization (FBI versus local mall security)). … An image of a suspect 705a may be scanned into the data analytics module 140 and items such as piercings, tattoos, hairstyle, logos, and headgear may be flagged and uploaded into the image database for analyzing later in real time or post time analysis.”)
encoding the plurality of visual attributes into a first signature representing the first person; comparing the first signature with a plurality of signatures of persons tagged as security risks; (Paragraph 89: “the particular user behavior may be defined by a model 143 of the behavior where the model 143 includes one or more attribute such a size, shape, length, width, aspect ratio or any other suitable identifying or identifiable attribute (e.g., tattoo or other various examples discussed herein). The computer 120 includes a matching algorithm or matching module 141, such as a comparator, that compares the defined characteristics and/or the model 143 of the particular user behavior with user behavior in the defined non-video data. Indication of a match by the matching algorithm or module 141 generates an investigation wherein the investigation includes the video data and/or non-video data identified by the matching module 141”; Paragraph 121) and
generating a security alert in response to the first signature corresponding to a second signature of the plurality of signatures based on comparing the first signature with the plurality of signatures of persons tagged as security risks. (Paragraph 90: “The investigation may be sent to other cameras or systems on a given network or provided over a community of networks to scan for a match or identify and alert. Matching module 141 may be configured as an independent module or incorporated into the data analytics module 140 in the computer 120 or in any cameras 110. The data analytics module 140 may also include a comparator module 142 configured to compare the model 143 of the particular user behavior and the non-video data.”; Paragraph 117; Paragraph 121: “An image of a suspect 705a may be scanned into the data analytics module 140 and items such as piercings, tattoos, hairstyle, logos, and headgear may be flagged and uploaded into the image database for analyzing later in real time or post time analysis. For example, if the individual 605a robs a convenient store and his/her facial image is captured onto one or more cameras 610, not only may his/her image be uploaded to all the cameras 610, but other identifying information or characteristics or traits 615 as well, e.g., hair style, tattoos, piercings, jewelry, clothing logos, etc. If the thief 605a enters the store again, an alert will automatically be sent to security. Even if the system recognizes a similar trait 615 on a different person that person may be deemed a suspect for questioning by authorities.”)
However, Carey does not disclose filtering one or more images of the video stream by executing a machine learning model trained to detect and remove visuals related to various types of personal identifiable information from the one or more images, or identifying a first person in the filtered one or more images of the video stream.
Harichandana discloses receiving a video stream captured by a camera installed in an environment; (Fig.1 and III. Dataset: Curation and Preparation: A. Curation: “Datasets targeting mobility aids like the Mobility Aids dataset [8] have a large number of images for the object detection task. But we observe that since these images are curated using video feeds, there is a high image similarity between images captured in consecutive frames and thus, the uniqueness of information within the dataset is very limited.”)
filtering one or more images of the video stream by executing a machine learning model trained to detect and remove visuals related to various types of personal identifiable information from the one or more images; (Fig.4 shows the output of different augmentations applied to a sample image. ; C. Data Anonymization: “Faces represent a general and ubiquitous type of private information, we aim to determine the capability of our model to train only on face anonymized dataset and still perform with significant accuracy… We use Gaussian blurring for face de-identification. We use the state-of-the-art ML Kit Face Detection model to detect face contour and generate a bounding box for each face … A crop of the bounding box is then extracted and we apply Gaussian blurring using the OpenCV library. The Gaussian blurring algorithm scans over each pixel of the cropped image. … The values are verified experimentally to effectively remove all identifying facial features. Fig. 5 illustrates the sample results of the entire procedure.”)
identifying a first person in the filtered one or more images of the video stream; (Figs. 7-8; IV. MODEL: “The pipeline we propose consists of object detection followed by eye-gaze detection as shown in Fig. 1.”; V: Result and Discussion: “ As shown in Fig. 7, considering model size and performance, M_Final is the optimal model which has a minimal memory footprint of 8.49MB after quantization and performs with an mAP of 89.52% on the validation set … We observe that our model achieves an mAP of 74.51% as illustrated in Fig. 8. The figure also shows the precision-recall curves for both classes and sample model outputs. This shows that our object detection module performs well with a minimal memory footprint.”)
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Carey to include the on-device real-time Privacy-Preserving AI System (PrivPAS), trained on a custom dataset of annotated images, as taught by Harichandana. One of ordinary skill in the art would have been motivated to combine the references because doing so would improve the accuracy of object detection as well as enhance the privacy of individuals with disabilities. (Harichandana: Conclusion)
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Regarding claim 12, Carey, as modified by Harichandana, discloses the claimed invention as discussed above. Carey further discloses storing the first signature in the plurality of signatures. (Paragraph 81: “the data analytics module 140 may be configured to detect a behavior of the person and store in the profile behavioral data corresponding to the behavior. … The behavior may include the person looking in a particular direction, reaching for an item of merchandise, purchasing the item of merchandise, traveling along a path at the premises, visiting an aisle or a location at the premises, spending an amount of time at the premises, spending an amount of time at the location at the premises, and/or visiting the premises on a number of separate instances.”)
Regarding claim 13, Carey, as modified by Harichandana, discloses the claimed invention as discussed above. Carey further discloses that the video stream is received at a first time, further comprising: receiving, prior to the first time, a prior video stream including images of the first person and a tag indicating that the first person is a security risk; generating the second signature of the first person; and storing the second signature in the plurality of signatures. (Paragraph 121: “An image of a suspect 705a may be scanned into the data analytics module 140 and items such as piercings, tattoos, hairstyle, logos, and headgear may be flagged and uploaded into the image database for analyzing later in real time or post time analysis. For example, if the individual 605a robs a convenient store and his/her facial image is captured onto one or more cameras 610, not only may his/her image be uploaded to all the cameras 610, but other identifying information or characteristics or traits 615 as well, e.g., hair style, tattoos, piercings, jewelry, clothing logos, etc. If the thief 605a enters the store again, an alert will automatically be sent to security. Even if the system recognizes a similar trait 615 on a different person that person may be deemed a suspect for questioning by authorities.”)
Regarding claim 14, Carey, as modified by Harichandana, discloses the claimed invention as discussed above. Carey further discloses receiving a user input including the tag. (Paragraph 21; Paragraph 121: “For example, if the individual 605a robs a convenient store and his/her facial image is captured onto one or more cameras 610, not only may his/her image be uploaded to all the cameras 610, but other identifying information or characteristics or traits 615 as well, e.g., hair style, tattoos, piercings, jewelry, clothing logos, etc. If the thief 605a enters the store again, an alert will automatically be sent to security.”)
Regarding claim 15, Carey, as modified by Harichandana, discloses the claimed invention. Carey further discloses: detecting a security event caused by the first person; and generating the tag indicating that the first person is the security risk. (Paragraph 121: “An image of a suspect 705a may be scanned into the data analytics module 140 and items such as piercings, tattoos, hairstyle, logos, and headgear may be flagged and uploaded into the image database for analyzing later in real time or post time analysis. … For example, if the individual 605a robs a convenient store and his/her facial image is captured onto one or more cameras 610, not only may his/her image be uploaded to all the cameras 610, but other identifying information or characteristics or traits 615 as well, e.g., hair style, tattoos, piercings, jewelry, clothing logos, etc. If the thief 605a enters the store again, an alert will automatically be sent to security.”)
Regarding claim 17, Carey, as modified by Harichandana, discloses the claimed invention. Carey further discloses the plurality of visual attributes comprises one or more of: attire, gender, ethnicity, age group, hair color, or gait. (Paragraph 121: “The system 100, 200, 300, 400, 500 and/or 600 may be manually programmed to recognize an individual or suspect 605a in an investigation (or prior felon) based on clothing type, piercings, tattoos, hair style, etc. (other than facial recognition which may also be utilized depending on authority of the organization (FBI versus local mall security)). An image of a suspect 705a may be scanned into the data analytics module 140 and items such as piercings, tattoos, hairstyle, logos, and headgear may be flagged and uploaded into the image database for analyzing later in real time or post time analysis”)
Regarding claim 18, Carey, as modified by Harichandana, discloses the claimed invention. Carey further discloses the environment is a retail environment and the first person is tagged for theft, wherein the second signature is associated with an identifier of a product that was stolen and the first signature is associated with the identifier of the product. (Paragraph 121: “An image of a suspect 705a may be scanned into the data analytics module 140 and items such as piercings, tattoos, hairstyle, logos, and headgear may be flagged and uploaded into the image database for analyzing later in real time or post time analysis. … For example, if the individual 605a robs a convenient store and his/her facial image is captured onto one or more cameras 610, not only may his/her image be uploaded to all the cameras 610, but other identifying information or characteristics or traits 615 as well, e.g., hair style, tattoos, piercings, jewelry, clothing logos, etc. If the thief 605a enters the store again, an alert will automatically be sent to security.”)
Regarding claim 19, Carey, as modified by Harichandana, discloses the claimed invention. Carey further discloses the environment is a retail environment and the first person is tagged for theft, wherein the second signature is associated with a location, in the retail environment, where a product was stolen and the first signature is associated with the location where the first person is standing. (Paragraphs 121-122: “An image of a suspect 705a may be scanned into the data analytics module 140 and items such as piercings, tattoos, hairstyle, logos, and headgear may be flagged and uploaded into the image database for analyzing later in real time or post time analysis. … For example, if the individual 605a robs a convenient store and his/her facial image is captured onto one or more cameras 610, not only may his/her image be uploaded to all the cameras 610, but other identifying information or characteristics or traits 615 as well, e.g., hair style, tattoos, piercings, jewelry, clothing logos, etc. If the thief 605a enters the store again, an alert will automatically be sent to security … may also generate a library of individuals and/or patrons that regularly frequent or visit a particular location thereby eliminating the need to track these particular individuals and allowing the system 100, 200, 300, 400, 500 and/or 600 to focus on identification and tracking of individuals not previously identified and saved in the library.”)
Regarding claim 20, Carey, as modified by Harichandana, discloses the claimed invention. Carey further discloses: comparing the first signature with a second plurality of signatures of persons tagged as non-security risks; and storing movement information of the first person in response to the first signature corresponding to a third signature of the second plurality of signatures based on comparing the first signature with the second plurality of signatures of persons tagged as non-security risks. (Paragraph 109: “The data analytics module 140 may also be configured to track abnormal velocity of patrons 504a-504l and/or individuals arriving or departing from a particular location 520. A typical arrival and/or departure velocity may be preset or obtained from an algorithm of previous individuals that may have arrived or departed from a particular location over a preset or variable amount of time.”; Paragraph 153: “At block 906, the profile data generated at block 904 is normalized based on one or more normalization criteria. For example, the profile data can be normalized based on (1) the number of visits that people have made to a particular location (e.g., a store location having one or more cameras 110 and antennae 150 by which video data and/or mobile communication device data was captured at block 902), (2) durations of time for which people have remained at a particular location, and/or (3) a frequency or repetition rate of visits that people have made to a particular location. This may be useful to identify repeat customers, a criminal casing a store before committing a robbery, and/or the like”)
Claim(s) 6 and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Carey (U.S. 20200045267 A1), in view of Harichandana et al (“PrivPAS: A real time Privacy-Preserving AI System and applied ethics.”; Harichandana), and in further view of Kogoshi (U.S. 20160203454 A1).
Regarding claim 6, Carey, as modified by Harichandana, discloses the claimed invention except wherein the processor is further configured to: compute a distance between respective data representing the first signature and the second signature; and determine that the first signature corresponds to the second signature in response to the distance being less than a threshold distance.
Kogoshi discloses the processor is further configured to: compute a distance between respective data representing the first signature (second feature amount) and the second signature (registered individual set); and determine that the first signature corresponds to the second signature in response to the distance being less than a threshold distance. (Paragraphs 47-48: “Further, the individual determination section 213 acquires the second feature amount of the customer from the captured image (Act S20). Then, the individual determination section 213 compares the second feature amount of the customer with that of each registered individual set as a comparison target to calculate the similarity degree therebetween (Act S21). Next, the individual determination section 213 determines whether or not there is a similarity degree greater than a threshold value within the calculated similarity degrees (Act S22).If it is determined in Act S22 that there is a similarity degree greater than a threshold value (Act S22: Yes), the individual determination section 213 determines that the customer has the greatest similarity degree in the second feature amount to the registered individual and thus the registered customer comes to the store, and then Act S23 is taken”)
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Carey and Harichandana to include a similarity degree calculation module that compares each feature amount acquired by the acquisition module with a pre-stored corresponding feature amount of a specific individual, as taught by Kogoshi, in order to provide an apparatus and method for recognizing a specific person; thus, one of ordinary skill in the art would have been motivated to combine the references since this would improve the ability to recognize or identify a person.
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
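For illustration only, the distance-threshold matching recited in claim 6 (compute a distance between signature data and declare correspondence when the distance falls below a threshold) can be sketched as follows. This is not code from Carey, Harichandana, or Kogoshi; the Euclidean distance metric and the threshold value are illustrative assumptions.

```python
import math

def signature_distance(sig_a, sig_b):
    """Euclidean distance between two feature vectors (signatures)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(sig_a, sig_b)))

def signatures_match(sig_a, sig_b, threshold=0.5):
    """Declare correspondence when the distance is below the threshold."""
    return signature_distance(sig_a, sig_b) < threshold

# Example: a stored signature compared against a newly extracted one
stored = [0.10, 0.82, 0.33]
candidate = [0.12, 0.80, 0.31]
print(signatures_match(stored, candidate))  # True: distance ≈ 0.035
```

Note that Kogoshi's quoted passage frames the test as a similarity degree *greater* than a threshold; a distance metric inverts that comparison, with a smaller distance indicating a closer match.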
Regarding claim 16, Carey, as modified by Harichandana, discloses the claimed invention except further comprising: computing a distance between respective data representing the first signature and the second signature; and determining that the first signature corresponds to the second signature in response to the distance being less than a threshold distance.
Kogoshi discloses further comprising: computing a distance between respective data representing the first signature (second feature amount) and the second signature (registered individual set); and determining that the first signature corresponds to the second signature in response to the distance being less than a threshold distance. (Paragraphs 47-48: “Further, the individual determination section 213 acquires the second feature amount of the customer from the captured image (Act S20). Then, the individual determination section 213 compares the second feature amount of the customer with that of each registered individual set as a comparison target to calculate the similarity degree therebetween (Act S21). Next, the individual determination section 213 determines whether or not there is a similarity degree greater than a threshold value within the calculated similarity degrees (Act S22).If it is determined in Act S22 that there is a similarity degree greater than a threshold value (Act S22: Yes), the individual determination section 213 determines that the customer has the greatest similarity degree in the second feature amount to the registered individual and thus the registered customer comes to the store, and then Act S23 is taken”)
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Carey and Harichandana to include a similarity degree calculation module that compares each feature amount acquired by the acquisition module with a pre-stored corresponding feature amount of a specific individual, as taught by Kogoshi, in order to provide an apparatus and method for recognizing a specific person; thus, one of ordinary skill in the art would have been motivated to combine the references since this would improve the ability to recognize or identify a person.
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Relevant Prior Art Directed to State of Art
Gurwicz et al. (U.S. 20140328512 A1), “System and Method for Suspect Search”, teaches a system and method that may generate a first signature for an object of interest based on an image of the object of interest, generate a second signature for a candidate object based on an image of the candidate object, calculate a similarity score by relating the first signature to the second signature, and determine that the image of the candidate object is an image of the object of interest based on the similarity score.
Siminoff (U.S. 20180268674 A1), “Dynamic Identification Of Threat Level Associated With A Person Using An Audio/Video Recording And Communication Device”, teaches a method for notifying a user of a threat level associated with a person within the field of view of a camera of an A/V recording and communication device, the method comprising: receiving, from the camera, identification data for the person; transmitting the received identification data to at least one backend server; receiving, from the backend server, information about a threat level associated with the person; and notifying the user of the threat level.
Khadloya et al. (U.S. 20220292902 A1), “Multiple-Factor Recognition and Validation for Security System”, teaches an access control method that can include receiving candidate information about a face and gesture from a first individual and receiving other image information from or about a second individual. The candidate information can be analyzed using a neural network-based recognition processor that can provide a first recognition result indicating whether the first individual corresponds to a first enrollee of the security system, and can provide a second recognition result indicating whether the second individual corresponds to a second enrollee of the security system.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Duy A Tran whose telephone number is (571)272-4887. The examiner can normally be reached Monday-Friday 8:00 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ONEAL R MISTRY can be reached at (313)-446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DUY TRAN/ Examiner, Art Unit 2674
/ONEAL R MISTRY/ Supervisory Patent Examiner, Art Unit 2674