DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This communication is in response to the remarks filed 10/27/2025. Claims 1-20 are pending.
Response to Remarks
Applicant’s arguments have been respectfully considered but are not persuasive. Accordingly, this action has been made FINAL.
Specification
The objections to the disclosure are withdrawn in view of the remarks and amendments filed.
Claim Rejections – 35 USC § 101
On pages 16-17 of the remarks, Applicant asserts:
“There is a fundamental difference in scale and complexity between a human visually checking footage from a few cameras and a computer system processing a vast amount of data streamed from hundreds or thousands of cameras in real-time to efficiently search for a specific target person. The latter is impossible for a human to perform mentally or manually... Instead of treating all camera images as search targets, the invention dynamically narrows down the set of camera images to be searched based on specific technical rules... This ‘narrowing down’ process is not a mere idea; it yields a technical improvement by directly reducing the load on the computer’s processor and memory, thereby improving speed and efficiency of the search process.”
The Examiner respectfully disagrees. Assuming, arguendo, that the invention dynamically narrows down the set of camera images to be searched based on specific technical rules, the claims do not recite the means by which this function of “narrowing down” a camera image is performed. The narrowing down appears to be performed on a “plurality of camera images” (e.g., ten camera images), as recited in the claims, and imposes no limit that would place such “narrowing down” beyond what the human mind can perform as a mental process. The independent claims are recited at such a high level of generality that they do not provide a meaningful technical improvement. That is, the claims do not expressly recite “reducing the load on the computer’s processor and memory, thereby improving speed and efficiency of the search process”.
On pages 21-22, Applicant argues that “interpretation must be given to the claims and such broadest reasonable interpretation is an interpretation which is, inter alia, consistent with the specification” and “that consideration must nonetheless be consistent with the specification to meet the above requirements of MPEP 2111 and MPEP 2106(II). But in this case, the specification does not support the rejection’s interpretation nor would the rejection’s interpretation be what one of ordinary skill in the art would have understood from the claims and specification.”
The Examiner respectfully disagrees. Although the claims are given their broadest reasonable interpretation consistent with the specification, nothing in the claims recites steps or functions suggesting that a human cannot perform the “narrowing down” of a camera image mentally. MPEP 2111.01, sections I, II, and III, explains, respectively, that the words of a claim must be given their “plain meaning” unless such meaning is inconsistent with the specification, that it is improper to import claim limitations from the specification, and that “plain meaning” refers to the ordinary and customary meaning given to the term by those of ordinary skill in the art. That is, the determining aspect of the claim does not specify how the “narrowing down” process is performed other than through generic computer components recited at a high level of generality. The claims do not recite means-plus-function language, and therefore language from the specification may not be imported into the claims. It is noted that the features upon which Applicant relies (i.e., reducing the load on the computer’s processor and memory, thereby improving the speed and efficiency of the search process) are not recited in the rejected claims. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
On page 24, Applicant argues “The series of steps is not merely the execution of an abstract task on a generic computer. Step 2 intelligently reduces the search space using specific, unconventional rules. Step 3 then improves the overall system performance by executing the search only within that reduced space.”
The Examiner respectfully disagrees. Step 2, as recited on page 24, does not provide enough information to render the determination of “a camera image for which each of the surveillance target persons is searched” unconventional, because it is commonly known to use weather conditions, time-based schedule information, or the behavior tendency of a target to obtain a camera image based on the unusual characteristics of the target at hand. That is, the human mind is capable of performing such a task mentally by looking at a plurality of images (e.g., two or more camera images) and determining which type of images should be searched (e.g., based on a behavior anomaly shown in the images) instead of searching all of the images one by one.
Additionally, on page 24, Applicant argues “the features recited in the dependent claims – such as providing a camera image in real-time (claim 2), ranking candidate information based on the size of the target person (claim 4), determining a missing person (claim 7), and generating a behavior report (claim 8) – further concretize this core inventive concept and enhance the system’s functionality.”
The Examiner respectfully disagrees. Each of the recited dependent claims recites an abstract idea itself, such as a mental process of observation and evaluation, or a mere extra-solution activity in the form of data gathering. That is, the providing of a camera image in claim 2 is data gathering as post-solution activity performed after a determination. Claims 4, 7 and 8 recite mental processes because a person can mentally rank candidates based on the size of the target person by observing the images and writing down a ranking (claim 4), determine a missing person by observing and evaluating images (claim 7), and generate a behavior report after observing the images and evaluating the behavior characteristics shown therein to indicate a target person’s behavior throughout the day (claim 8). None of these limitations, whether considered alone or in combination, amounts to significantly more than the judicial exception or includes elements indicative of integration into a practical application.
In summary, the Examiner respectfully disagrees with Applicant’s arguments as stated above in addition to the reasons indicated below:
As shown on pages 6-8 of the Non-Final Office Action dated 07/25/2025, and in addition to the current 101 rejection shown below, independent claims 1, 9 and 15 recite abstract ideas under the Step 2A Prong 1 analysis because the limitations of “narrowing down a camera image”, as recited in the currently amended claims, are limitations that, under the broadest reasonable interpretation (BRI), cover performance in the mind and therefore fall within the “Mental Processes” grouping of abstract ideas. That is, the recited limitations can be performed through observation and evaluation by the human mind. For example, from among a few camera images or footage, the human mind can determine (i.e., narrow down) a camera image that includes a person of interest based on the weather as seen within the image, an unusual person at a scene when an establishment is closed during certain scheduled hours, or unusual or suspicious behavior the person is exhibiting that is out of the ordinary. This observation and evaluation can then narrow down to specific camera images and determine which images contain the target person, thus providing nothing more than a mental process.
Applicant argues, on page 17 of the remarks, that “the invention dynamically narrows down the set of camera images to be searched based on specific technical rules, namely ‘weather information,’ ‘schedule information,’ and ‘behavior tendency’”. However, these arguments do not identify a limitation that removes the claims from the mental-process grouping under the Step 2A Prong 1 analysis, as shown above, because the recited claims still fall within the scope of the abstract idea.
Similarly, on pages 21-22 of the remarks, Applicant argues “the broadest reasonable interpretation of each claim as a whole is supposed to be a boundary on the Step 2A Prong 1 analysis” and asserts that “Or in other words, the ‘but for’ computer component analysis at Step 2A Prong 1, be those components generic or not, must exist within a BRI or else the ‘bounds’ of Step 2A Prong 1 are exceeded”. However, as mentioned above, the BRI of the claims as a whole includes only generic computer components, such as a processor and memory recited at a high level of generality, and does not impose meaningful limits on the claims that would amount to significantly more than the judicial exception. That is, the processor and memory amount to no more than mere instructions to apply the exception using a generic computer.
Furthermore, on pages 23-24 of the remarks, Applicant argues “the ordered combination of elements constitutes an inventive concept that is significantly more than the idea itself” and describes a series of steps (Steps 1-3) in which the “series of steps is not merely the execution of an abstract task on a generic computer” because “the crucial point is that these generic components are especially programmed to execute the unconventional process of the present invention”. However, as described above, these generic computer components are not recited with special instructions and amount to no more than mere instructions to apply the exception using a generic computer. Moreover, the asserted inventive concept, described on page 17 of the remarks as “reduc[ing] the load on the computer’s processor and memory”, on page 24 as “efficiently managing and searching for information from a multitude of surveillance cameras”, and in paragraphs [0008] and [0009] of the application specification, is not reflected in the currently amended claims, because the claims recite only a “plurality of camera images”, which is still attainable for the human mind to process mentally.
Therefore, the Examiner respectfully cannot withdraw the 101 rejections, as explained above, and maintains the rejections: the argued limitations were written so broadly that they recite abstract ideas, such as mental processes of observation and evaluation, or recite additional elements in the form of extra-solution activities that pertain to data gathering, and the claims do not include additional elements indicative of integration into a practical application under Step 2A Prong 2 or additional elements that amount to significantly more than the judicial exception under Step 2B. As a result, the claims stand rejected as follows.
Examiner Suggestion(s)
The Examiner suggests adding to the claims specifics as to how the “narrowing down” of a camera image is performed, that is, what performs or executes the function of narrowing down the camera image so as to improve the computer’s processing speed, or how many of these camera images are processed at once. The currently recited claims lack such specifics, which would otherwise remove the step of narrowing down a camera image from the mental-process grouping of abstract ideas. Alternatively, if Applicant believes the specific combination of claimed elements constitutes an inventive concept that is “significantly more” than the abstract idea and yields a technical effect or improvement, then the claim language should reflect that concept. For example, if the claims, when viewed as a whole, reduce the processing load of a computer system, then an amendment to the claims should include details as to how the technical problem, i.e., “as the number of surveillance cameras increases, the amount of generated image data becomes enormous”, is solved. The currently presented claims limit the abstract idea of “narrowing down” a camera image only to a mere baseline of a few camera images (e.g., fewer than ten). If the technical solution of the claims is to improve this aspect such that it is impossible for a human to perform mentally or manually, then the independent claims should recite a specific number of camera images processed at once (e.g., a large number of camera images, in the hundreds or thousands, processed in real-time), or a specific environment in which the abstract idea is applied, in order to integrate the abstract idea into a practical application or amount to significantly more.
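For illustration only, the kind of concrete, claim-level specificity suggested above might resemble the following minimal sketch, in which the narrowing rules operate over a specifically sized, real-time set of camera feeds. All names, fields, and rules below are hypothetical assumptions chosen for this example; they are not the claimed invention, the disclosed embodiments, or any cited reference.

```python
from dataclasses import dataclass
from datetime import datetime

# Purely hypothetical illustration; the names, fields, and rules below are
# assumptions for this sketch, not the claimed invention or any reference.

@dataclass
class CameraImage:
    camera_id: str
    captured_at: datetime
    location: str  # e.g., "indoor-lobby", "outdoor-gate"

def narrow_down(images: list[CameraImage],
                weather_today: str,
                scheduled_locations: set[str],
                habitual_locations: set[str]) -> list[CameraImage]:
    """Keep only images from locations where the target person is likely
    to appear, based on schedule, behavior tendency, and weather."""
    kept = []
    for img in images:
        if img.location in scheduled_locations:    # schedule information
            kept.append(img)
        elif img.location in habitual_locations:   # behavior tendency
            kept.append(img)
        elif weather_today == "rain" and img.location.startswith("indoor"):
            kept.append(img)                       # weather information
    return kept
```

Under an amendment of this kind, the search step would execute only over the returned subset (e.g., hundreds or thousands of real-time feeds reduced to a handful), which is the sort of processor- and memory-load reduction Applicant describes but the present claims do not recite.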
Claim Rejections – 35 USC § 102 and 103
On page 26, Applicant argues “Koyama does not set forth to ‘narrow down a camera image... from among the plurality of camera images’ much less ‘based on at least one of weather information on a day, schedule information of each surveillance target person on the day, and a behavior tendency of each surveillance target person within a facility’.”
However, the Examiner respectfully disagrees. Koyama discloses such limitations, for example, at [0058] “In the example illustrated in FIG. 2, the appearance record storage unit 22 associates and stores, for each person ID for identifying the monitored subject, a person image ID, which is an identifier identifying subject-image-capture-information, person identification information representing the subject-identification-information, the time of image capture, and the name of the camera that captured the image”. That is, the image of the target is shown as “person image ID” and is narrowed down based on the characteristic rules (e.g., behavior detection/tendency, see, for example, [0080]-[0082]) as explained at [0088] “The result output unit 17 outputs the monitored subject identified by the behavior detection unit 25. The result output unit 17 may store output results in the output result storage unit 20. FIG. 4 is a diagram illustrating an example of information stored in the output result storage unit 20. As illustrated in FIG. 4, the result output unit 17 may output the person image ID identifying an image of a person, a time of capture of the image of the person, and the camera with which the image is captured. Alternatively, the result output unit 17 may output only a person image ID.”
Therefore, the argued limitations were written so broadly that they read on the cited references or are shown explicitly by the references. As a result, the claims stand rejected as follows.
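For additional context, the rule-matching operation quoted above from Koyama at [0086]-[0088] may be summarized by the following minimal sketch. All names and data shapes are hypothetical assumptions for illustration; they are not Koyama’s actual implementation.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of the compare-and-identify step described in
# Koyama [0086]-[0088]; names and structures are illustrative assumptions.

@dataclass
class AppearanceRecord:
    person_image_id: str
    capture_time: str  # e.g., "23:45"
    camera_name: str

# A "rule" (Koyama [0069], [0084]) is modeled here as a predicate over the
# appearance records generated for one monitored subject.
Rule = Callable[[list[AppearanceRecord]], bool]

def detect_monitored_subjects(records_by_person: dict[str, list[AppearanceRecord]],
                              rules: list[Rule]) -> list[str]:
    """Return person IDs whose appearance-record pattern matches any stored
    rule, mirroring the behavior detection unit's comparison step."""
    return [person_id
            for person_id, records in records_by_person.items()
            if any(rule(records) for rule in rules)]

# Simplified example of the rule at Koyama [0081]: a person repeatedly
# appears during a time of day in which people do not usually appear.
def appears_after_hours(records: list[AppearanceRecord]) -> bool:
    return sum(r.capture_time >= "23:00" for r in records) >= 2
```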
Claim Objections
Claim(s) 1, 9 and 15 are objected to because of the following informalities:
Claim(s) 1, 9 and 15 should recite “narrow down a camera image for which each of the surveillance target persons is searched, from among the plurality of camera images, based on at least one of weather information on a day, schedule information of each surveillance target person on the day, [[and]] or a behavior tendency of each surveillance target person within a facility;” in order to correct the typographical error and avoid clarity issues under 35 U.S.C. 112(b).
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Regarding independent claims 1, 9 and 15 and their dependent claims 2-8, 10-14, and 16-20:
Step 1 analysis: Claims 1, 9 and 15 are directed to an apparatus, a method, and a non-transitory storage medium, respectively, each of which falls within one of the four statutory categories.
Step 2A Prong 1 analysis: Claim(s) 1, 9 and 15 recite, in part:
“narrow down a camera image for which each of the surveillance target persons is searched, from among the plurality of camera images, based on at least one of weather information on a day, schedule information of each surveillance target person on the day, and a behavior tendency of each surveillance target person within a facility”
The limitations shown above, as drafted, are processes that, under the broadest reasonable interpretation, cover performance of the limitations in the mind, and thus fall within the “Mental Processes” grouping of abstract ideas.
The limitations of “narrow down a camera image for which each of the surveillance target persons is searched, from among the plurality of camera images, based on at least one of weather information on a day, schedule information of each surveillance target person on the day, and a behavior tendency of each surveillance target person within a facility” recite steps that the human mind can also perform through observation and evaluation. For example, the human mind can determine a camera image based on human behavior, weather, or schedule information by looking through camera footage, observing which camera images contain the target person, and then searching for the target person through those images after sorting out which frames contain the target.
Accordingly, the claims recite an abstract idea.
Step 2A Prong 2 analysis: This judicial exception is not integrated into a practical application. In particular, the claim(s) recite the following additional element(s) –
acquire a plurality of camera images photographed by a plurality of cameras installed within a facility;
The step of “acquire a plurality of camera images...” merely constitutes pre-solution activity involving data gathering, such as acquiring information/data. Such extra-solution activities, as additional elements, do not integrate the abstract idea into a practical application. Please see MPEP §2106.05(g). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim as a whole is directed to an abstract idea. Please see MPEP §2106.04(a)(2), section III.C.
In view of the foregoing, the additional elements do not integrate the abstract idea into a practical application.
Step 2B analysis: There are no additional elements that amount to significantly more than the judicial exception. Moreover, the recited processor and memory are recited at such a high level of generality, i.e., as generic computer components, that they do not impose any meaningful limits in the claims that would amount to significantly more than the judicial exception; therefore, the claims as a whole are directed to an abstract idea.
For all the foregoing reasons, claims 1, 9 and 15 do not comply with the requirements of 35 U.S.C. 101. The dependent claims 2-8, 10-14, and 16-20 do not provide elements that overcome the deficiencies of independent claims 1, 9 and 15. Specifically, claims 2-8, 10-14, and 16-20 each recite abstract ideas themselves, such as mental processes of observation and evaluation, or recite additional elements in the form of extra-solution activities that pertain to data gathering, and they do not include additional elements indicative of integration into a practical application under Step 2A Prong 2 or additional elements that amount to significantly more than the judicial exception under Step 2B. Therefore, claims 2-8, 10-14, and 16-20 are not patent eligible under 35 U.S.C. 101.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1, 5-9, 13-15 and 19-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Koyama (US 20170053191 A1).
Regarding claim 1, Koyama discloses an image processing apparatus comprising: at least one memory configured to store one or more instructions; and at least one processor configured to execute the one or more instructions to (CPU of a computer, Koyama, [0089]): acquire a plurality of camera images photographed by a plurality of cameras installed within a facility (“surveillance cameras in locations such as railroad stations and particular facilities and to analyze images captured with the surveillance cameras to perform various kinds of determination... one camera or multiple cameras” Koyama, [0003], [0004]); narrow down a camera image for which each of the surveillance target persons is searched, from among the plurality of camera images (“In the example illustrated in FIG. 2, the appearance record storage unit 22 associates and stores, for each person ID for identifying the monitored subject, a person image ID, which is an identifier identifying subject-image-capture-information, person identification information representing the subject-identification-information, the time of image capture, and the name of the camera that captured the image”, Koyama, [0058]; That is, the image of the target is shown as “person image ID” and is narrowed down based on the characteristic rules (e.g., behavior detection/tendency, see, for example, [0080]-[0082]) as explained at [0088] “The result output unit 17 outputs the monitored subject identified by the behavior detection unit 25. The result output unit 17 may store output results in the output result storage unit 20. FIG. 4 is a diagram illustrating an example of information stored in the output result storage unit 20. As illustrated in FIG. 4, the result output unit 17 may output the person image ID identifying an image of a person, a time of capture of the image of the person, and the camera with which the image is captured. Alternatively, the result output unit 17 may output only a person image ID.”), based on at least one of weather information on a day, schedule information of each surveillance target person on the day, and a behavior tendency of each surveillance target person within a facility (“In the behavior analysis, for example, one camera or multiple cameras whose coverage areas overlap one another are used to recognize the location of a specific person and changes in the location of the person with time are tracked, thereby identifying where and how long the person stayed.” Koyama, [0004]; “Specifically, the behavior detection unit 25 compares a pattern that can be identified from appearance records stored in the appearance record storage unit 22 with the rules stored in the rule storage unit 24 and identifies the monitored subject that has the appearance record of the pattern that matches the rule.” Koyama, [0086]; [0069] suspicious behavior), (“The monitored subject who is making a movement deviating from normal movements of people who use a facility can be determined to be the monitored subject displaying suspicious behavior.” Koyama, [0071]; “rules specified by the relationships between two or more appearance records including image capture location and the image capture time are stored in the rule storage unit 24. The behavior detection unit 25 identifies the monitored subject that has the generated appearance record that matches any of the rules.” Koyama, [0084]); and search for each of the surveillance target persons from among each of the camera images being determined as a target for which each of the surveillance target persons is searched (“The behavior detection unit 25 identifies the monitored subject that has the appearance record that matches a defined rule. Specifically, the behavior detection unit 25 compares a pattern that can be identified from appearance records stored in the appearance record storage unit 22 with the rules stored in the rule storage unit 24 and identifies the monitored subject that has the appearance record of the pattern that matches the rule... The result output unit 17 outputs the monitored subject identified by the behavior detection unit 25. The result output unit 17 may store output results in the output result storage unit 20. As illustrated in FIG. 4, the result output unit 17 may output the person image ID identifying an image of a person, a time of capture of the image of the person, and the camera with which the image is captured.” Koyama, [0086]-[0088]).
Regarding claim 5, Koyama discloses the image processing apparatus according to claim 1, wherein the processor is further configured to execute the one or more instructions to: determine, based on a result of the search, behavior made by each of the surveillance target persons, and generate behavior history information (“The appearance record storage unit 22 stores the appearance record of the monitored subject. Specifically, the appearance record storage unit 22 stores, for each the monitored subject, the appearance record in which items of information, including a time of image capture, are associated with the monitored subject. The time of image capture can be referred to as the time of appearance of the monitored subject.” Koyama, [0057]; i.e., the recorded image is determined based on the behavior information that includes recorded actions subject to behavior history information); and select (“The result output unit 17 outputs the monitored subject identified by the behavior detection unit 25.” Koyama, [0088]), based on the behavior history information, the camera image that satisfies a predetermined condition for [[the]] each surveillance target person (“The rule storage unit 24 stores rules defining patterns of behavior of monitored subjects to be extracted. As in the present example embodiment, in order to extract the monitored subject who is displaying suspicious behavior, the rule storage unit 24 stores a pattern of suspicious behavior identified from the appearance records as rules.” Koyama, [0069]; i.e., the suspicious behavior satisfies a predetermined condition, or rule, to detect and select the individual with suspicious behavior in the images).
Regarding claim 6, Koyama discloses the image processing apparatus according to claim 5, wherein the behavior history information indicates a content of behavior (“Examples of rules to be defined will be described below. The monitored subject who is making a movement deviating from normal movements of people who use a facility can be determined to be the monitored subject displaying suspicious behavior.” Koyama, [0071]; i.e., rules provide a certain content behavior such as “a person repeatedly appears during a time of day in which usually people do not appear”, Koyama, [0081]), and a behavior execution time (“Specifically, the appearance record storage unit 22 stores, for each the monitored subject, the appearance record in which items of information, including a time of image capture, are associated with the monitored subject.” Koyama, [0057]), and wherein the processor is further configured to execute the one or more instructions to select the camera image photographed at a timing when predetermined behavior is made (“The result output unit 17 outputs the monitored subject identified by the behavior detection unit 25.” Koyama, [0088]; i.e., selected based on behavior, or rules).
Regarding claim 7, Koyama discloses the image processing apparatus according to claim 1, wherein the processor is further configured to execute the one or more instructions to determine, based on a result of the search, a state in which the surveillance target person is not photographed by any of the cameras for a predetermined time or longer (“a rule is defined, for example, that “a person does not have the appearance record indicating an image capture location where the person is expected to appear based on a given image capture location within a certain period after the appearance record indicating the given image capture location”, in the rule storage unit 24 allowing identification of the monitored subject that matches the third rule.” Koyama, [0080]).
Regarding claim 8, Koyama discloses the image processing apparatus according to claim 1, wherein the processor is further configured to execute the one or more instructions to: determine, based on a result of the search, behavior made by each of the surveillance target persons, and generate behavior history information; and generate, based on the behavior history information, a behavior report indicating a behavior content of each of the surveillance target persons in a day, or material information for generating the behavior report, wherein the behavior report includes a text, and the surveillance camera image (“In the example illustrated in FIG. 2, the appearance record storage unit 22 associates and stores, for each person ID for identifying the monitored subject, a person image ID, which is an identifier identifying subject-image-capture-information, person identification information representing the subject-identification-information, the time of image capture, and the name of the camera that captured the image” Koyama, [0058]; “FIG. 11 illustrates an example of information stored in the event storage unit 29. In the example illustrated in FIG. 11, the event storage unit 29 stores the time and date of occurrence of the event and the location of occurrence of the event in association with each other as event information.” [0123]; “in FIG. 3, the rule storage unit 24 stores rule descriptions representing patterns of behavior of monitored subjects to be extracted in association with rule names for identifying rules.” [0070]).
Regarding claim 9, Koyama discloses an image processing method executed by a computer, comprising (CPU of a computer, Koyama, [0089]): acquiring a plurality of camera images photographed by a plurality of cameras installed within a facility (“surveillance cameras in locations such as railroad stations and particular facilities and to analyze images captured with the surveillance cameras to perform various kinds of determination... one camera or multiple cameras” Koyama, [0003], [0004]); narrowing down a camera image for which each of the surveillance target persons is searched, from among the plurality of camera images (“In the example illustrated in FIG. 2, the appearance record storage unit 22 associates and stores, for each person ID for identifying the monitored subject, a person image ID, which is an identifier identifying subject-image-capture-information, person identification information representing the subject-identification-information, the time of image capture, and the name of the camera that captured the image”, Koyama, [0058]; That is, the image of the target is shown as “person image ID” and is narrowed down based on the characteristic rules (e.g., behavior detection/tendency, see, for example, [0080]-[0082]) as explained at [0088] “The result output unit 17 outputs the monitored subject identified by the behavior detection unit 25. The result output unit 17 may store output results in the output result storage unit 20. FIG. 4 is a diagram illustrating an example of information stored in the output result storage unit 20. As illustrated in FIG. 4, the result output unit 17 may output the person image ID identifying an image of a person, a time of capture of the image of the person, and the camera with which the image is captured. Alternatively, the result output unit 17 may output only a person image ID.”), based on at least one of weather information on a day, schedule information of each surveillance target person on the day, and a behavior tendency of each surveillance target person within a facility (“In the behavior analysis, for example, one camera or multiple cameras whose coverage areas overlap one another are used to recognize the location of a specific person and changes in the location of the person with time are tracked, thereby identifying where and how long the person stayed.” Koyama, [0004]; “Specifically, the behavior detection unit 25 compares a pattern that can be identified from appearance records stored in the appearance record storage unit 22 with the rules stored in the rule storage unit 24 and identifies the monitored subject that has the appearance record of the pattern that matches the rule.” Koyama, [0086]; [0069] suspicious behavior), (“The monitored subject who is making a movement deviating from normal movements of people who use a facility can be determined to be the monitored subject displaying suspicious behavior.” Koyama, [0071]; “rules specified by the relationships between two or more appearance records including image capture location and the image capture time are stored in the rule storage unit 24. The behavior detection unit 25 identifies the monitored subject that has the generated appearance record that matches any of the rules.” Koyama, [0084]); and searching for each of the surveillance target persons from among each of the camera images being determined as a target for which each of the surveillance target persons is searched (“The behavior detection unit 25 identifies the monitored subject that has the appearance record that matches a defined rule. Specifically, the behavior detection unit 25 compares a pattern that can be identified from appearance records stored in the appearance record storage unit 22 with the rules stored in the rule storage unit 24 and identifies the monitored subject that has the appearance record of the pattern that matches the rule... The result output unit 17 outputs the monitored subject identified by the behavior detection unit 25. The result output unit 17 may store output results in the output result storage unit 20. As illustrated in FIG. 4, the result output unit 17 may output the person image ID identifying an image of a person, a time of capture of the image of the person, and the camera with which the image is captured.” Koyama, [0086]-[0088]).
Regarding claim 13, Koyama discloses the image processing method according to claim 9, further comprising, determining, based on a result of the search, behavior made by each of the surveillance target persons, and generating behavior history information (“The appearance record storage unit 22 stores the appearance record of the monitored subject. Specifically, the appearance record storage unit 22 stores, for each the monitored subject, the appearance record in which items of information, including a time of image capture, are associated with the monitored subject. The time of image capture can be referred to as the time of appearance of the monitored subject.” Koyama, [0057]; i.e., the recorded image is determined based on the behavior information that includes recorded actions subject to behavior history information); and selecting (“The result output unit 17 outputs the monitored subject identified by the behavior detection unit 25.” Koyama, [0088]), based on the behavior history information, the camera image that satisfies a predetermined condition for [[the]] each surveillance target person (“The rule storage unit 24 stores rules defining patterns of behavior of monitored subjects to be extracted. As in the present example embodiment, in order to extract the monitored subject who is displaying suspicious behavior, the rule storage unit 24 stores a pattern of suspicious behavior identified from the appearance records as rules.” Koyama, [0069]; i.e., the suspicious behavior satisfies a predetermined condition, or rule, to detect and select the individual with suspicious behavior in the images).
Regarding claim 14, Koyama discloses the image processing method according to claim 13, wherein the behavior history information indicates a content of behavior (“Examples of rules to be defined will be described below. The monitored subject who is making a movement deviating from normal movements of people who use a facility can be determined to be the monitored subject displaying suspicious behavior.” Koyama, [0071]; i.e., rules provide a certain content behavior such as “a person repeatedly appears during a time of day in which usually people do not appear”, Koyama, [0081]), and a behavior execution time (“Specifically, the appearance record storage unit 22 stores, for each the monitored subject, the appearance record in which items of information, including a time of image capture, are associated with the monitored subject.” Koyama, [0057]), and wherein the method further comprises selecting the camera image photographed at a timing when predetermined behavior is made (“The result output unit 17 outputs the monitored subject identified by the behavior detection unit 25.” Koyama, [0088]; i.e., selected based on behavior, or rules).
Regarding claim 15, Koyama discloses a non-transitory storage medium storing a program causing a computer to (CPU of a computer, Koyama, [0089] and non-transitory program [0168]): acquire a plurality of camera images photographed by a plurality of cameras installed within a facility (“surveillance cameras in locations such as railroad stations and particular facilities and to analyze images captured with the surveillance cameras to perform various kinds of determination... one camera or multiple cameras” Koyama, [0003], [0004]); narrow down a camera image for which each of the surveillance target persons is searched, from among the plurality of camera images (“In the example illustrated in FIG. 2, the appearance record storage unit 22 associates and stores, for each person ID for identifying the monitored subject, a person image ID, which is an identifier identifying subject-image-capture-information, person identification information representing the subject-identification-information, the time of image capture, and the name of the camera that captured the image”, Koyama, [0058]; That is, the image of the target is shown as “person image ID” and is narrowed down based on the characteristic rules (e.g., behavior detection/tendency, see, for example, [0080]-[0082]) as explained at [0088] “The result output unit 17 outputs the monitored subject identified by the behavior detection unit 25. The result output unit 17 may store output results in the output result storage unit 20. FIG. 4 is a diagram illustrating an example of information stored in the output result storage unit 20. As illustrated in FIG. 4, the result output unit 17 may output the person image ID identifying an image of a person, a time of capture of the image of the person, and the camera with which the image is captured. Alternatively, the result output unit 17 may output only a person image ID.”), based on at least one of weather information on a day, schedule information of each surveillance target person on the day, and a behavior tendency of each surveillance target person within a facility (“In the behavior analysis, for example, one camera or multiple cameras whose coverage areas overlap one another are used to recognize the location of a specific person and changes in the location of the person with time are tracked, thereby identifying where and how long the person stayed.” Koyama, [0004]; “Specifically, the behavior detection unit 25 compares a pattern that can be identified from appearance records stored in the appearance record storage unit 22 with the rules stored in the rule storage unit 24 and identifies the monitored subject that has the appearance record of the pattern that matches the rule.” Koyama, [0086]; [0069] suspicious behavior), (“The monitored subject who is making a movement deviating from normal movements of people who use a facility can be determined to be the monitored subject displaying suspicious behavior.” Koyama, [0071]; “rules specified by the relationships between two or more appearance records including image capture location and the image capture time are stored in the rule storage unit 24. The behavior detection unit 25 identifies the monitored subject that has the generated appearance record that matches any of the rules.” Koyama, [0084]); and search for each of the surveillance target persons from among each of the camera images being determined as a target for which each of the surveillance target persons is searched (“The behavior detection unit 25 identifies the monitored subject that has the appearance record that matches a defined rule. Specifically, the behavior detection unit 25 compares a pattern that can be identified from appearance records stored in the appearance record storage unit 22 with the rules stored in the rule storage unit 24 and identifies the monitored subject that has the appearance record of the pattern that matches the rule... The result output unit 17 outputs the monitored subject identified by the behavior detection unit 25. The result output unit 17 may store output results in the output result storage unit 20. As illustrated in FIG. 4, the result output unit 17 may output the person image ID identifying an image of a person, a time of capture of the image of the person, and the camera with which the image is captured.” Koyama, [0086]-[0088]).
Regarding claim 19, Koyama discloses the non-transitory storage medium according to claim 15, wherein the program causing the computer to: determine, based on a result of the search, behavior made by each of the surveillance target persons, and generate behavior history information (“The appearance record storage unit 22 stores the appearance record of the monitored subject. Specifically, the appearance record storage unit 22 stores, for each the monitored subject, the appearance record in which items of information, including a time of image capture, are associated with the monitored subject. The time of image capture can be referred to as the time of appearance of the monitored subject.” Koyama, [0057]; i.e., the recorded image is determined based on the behavior information that includes recorded actions subject to behavior history information); and select (“The result output unit 17 outputs the monitored subject identified by the behavior detection unit 25.” Koyama, [0088]), based on the behavior history information, the camera image that satisfies a predetermined condition for [[the]] each surveillance target person (“The rule storage unit 24 stores rules defining patterns of behavior of monitored subjects to be extracted. As in the present example embodiment, in order to extract the monitored subject who is displaying suspicious behavior, the rule storage unit 24 stores a pattern of suspicious behavior identified from the appearance records as rules.” Koyama, [0069]; i.e., the suspicious behavior satisfies a predetermined condition, or rule, to detect and select the individual with suspicious behavior in the images).
Regarding claim 20, Koyama discloses the non-transitory storage medium according to claim 19, wherein the behavior history information indicates a content of behavior (“Examples of rules to be defined will be described below. The monitored subject who is making a movement deviating from normal movements of people who use a facility can be determined to be the monitored subject displaying suspicious behavior.” Koyama, [0071]; i.e., rules provide a certain content behavior such as “a person repeatedly appears during a time of day in which usually people do not appear”, Koyama, [0081]), and a behavior execution time (“Specifically, the appearance record storage unit 22 stores, for each the monitored subject, the appearance record in which items of information, including a time of image capture, are associated with the monitored subject.” Koyama, [0057]), and wherein the program causing the computer to select the camera image photographed at a timing when predetermined behavior is made (“The result output unit 17 outputs the monitored subject identified by the behavior detection unit 25.” Koyama, [0088]; i.e., selected based on behavior, or rules).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 2-3, 10-11 and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Koyama in view of Hirasawa et al. (US 20200404222 A1).
Regarding claim 2, Koyama discloses all of the subject matter as described above, but does not specifically teach: determine, based on a result of the search, [[the]] a camera, of the cameras, used to be photographing the specified surveillance target person, and transmit, to an external apparatus, the camera image photographed by the determined camera. However, Hirasawa, in the same field of endeavor, teaches determine, based on a result of the search, [[the]] a camera, of the cameras, used to be photographing the specified surveillance target person, and transmit, to an external apparatus, the camera image photographed by the determined camera (“the captured images from camera 1 may be transmitted to the head office or the management facility of the cloud computing system, and the captured images from camera 1 may be accumulated in the device installed therein.” Hirasawa, [0193]).
Therefore, it would have been obvious to one of ordinary skill in the art to combine Koyama and Hirasawa before the effective filing date of the claimed invention. The motivation for this combination of references would have been to display the image of the person designated by the monitoring person as the tracking target in the image display window so that a monitoring person can check what types of actions a person performs (Hirasawa, [0002]). This motivation for the combination of Koyama and Hirasawa is supported by KSR exemplary rationale (G): some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention. See MPEP 2141(III).
Regarding claim 3, Koyama and Hirasawa disclose the image processing apparatus according to claim 1, wherein the processor is further configured to execute the one or more instructions to determine, based on a result of the search, [[the]] a camera, of the cameras, used to be photographing the specified surveillance target person, in a case where a plurality of the cameras are determined, output candidate information indicating the determined plurality of the cameras, accept an input of specifying one from among the determined plurality of the cameras (“a tracking target setter that displays on the display device, a tracking target search screen in which thumbnail images of respective moving objects are displayed as a list, and in response to an operation input by a monitoring person designating a moving object to be tracked by selecting the thumbnail image” Hirasawa, [0049]), and transmit, to an external apparatus, the camera image photographed by the specified camera (“the captured images from camera 1 may be transmitted to the head office or the management facility of the cloud computing system, and the captured images from camera 1 may be accumulated in the device installed therein.” Hirasawa, [0193]).
Therefore, combining Koyama and Hirasawa would meet the claim limitations for the same reasons as previously discussed in claim 2.
Regarding claim 10, Koyama and Hirasawa disclose the image processing method according to claim 9, further comprising, determining, based on a result of the search, [[the]] a camera, of the cameras, being presumed used to be photographing the specified surveillance target person, and transmitting, to an external apparatus, the camera image photographed by the determined camera (“the captured images from camera 1 may be transmitted to the head office or the management facility of the cloud computing system, and the captured images from camera 1 may be accumulated in the device installed therein.” Hirasawa, [0193]).
Therefore, combining Koyama and Hirasawa would meet the claim limitations for the same reasons as previously discussed in claim 2.
Regarding claim 11, Koyama and Hirasawa disclose the image processing method according to claim 9, further comprising, determining, based on a result of the search, [[the]] a camera, of the cameras, being presumed used to be photographing the specified surveillance target person, in a case where a plurality of the cameras are determined, outputting candidate information indicating the determined plurality of the cameras (“a tracking target setter that displays on the display device, a tracking target search screen in which thumbnail images of respective moving objects are displayed as a list, and in response to an operation input by a monitoring person designating a moving object to be tracked by selecting the thumbnail image” Hirasawa, [0049]), accepting an input of specifying one from among the determined plurality of the cameras, and transmitting, to an external apparatus, the camera image photographed by the specified camera (“the captured images from camera 1 may be transmitted to the head office or the management facility of the cloud computing system, and the captured images from camera 1 may be accumulated in the device installed therein.” Hirasawa, [0193]).
Therefore, combining Koyama and Hirasawa would meet the claim limitations for the same reasons as previously discussed in claim 2.
Regarding claim 16, Koyama and Hirasawa disclose the non-transitory storage medium according to claim 15, wherein the program causing the computer to determine, based on a result of the search, [[the]] a camera, of the cameras, being presumed used to be photographing the specified surveillance target person, and transmit, to an external apparatus, the camera image photographed by the determined camera (“the captured images from camera 1 may be transmitted to the head office or the management facility of the cloud computing system, and the captured images from camera 1 may be accumulated in the device installed therein.” Hirasawa, [0193]).
Therefore, combining Koyama and Hirasawa would meet the claim limitations for the same reasons as previously discussed in claim 2.
Regarding claim 17, Koyama and Hirasawa disclose the non-transitory storage medium according to claim 15, wherein the program causing the computer to determine, based on a result of the search, [[the]] a camera, of the cameras, being presumed used to be photographing the specified surveillance target person, in a case where a plurality of the cameras are determined, output candidate information indicating the determined plurality of the cameras (“a tracking target setter that displays on the display device, a tracking target search screen in which thumbnail images of respective moving objects are displayed as a list, and in response to an operation input by a monitoring person designating a moving object to be tracked by selecting the thumbnail image” Hirasawa, [0049]), accept an input of specifying one from among the determined plurality of the cameras, and transmit, to an external apparatus, the camera image photographed by the specified camera (“the captured images from camera 1 may be transmitted to the head office or the management facility of the cloud computing system, and the captured images from camera 1 may be accumulated in the device installed therein.” Hirasawa, [0193]).
Therefore, combining Koyama and Hirasawa would meet the claim limitations for the same reasons as previously discussed in claim 2.
Claim(s) 4, 12 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Koyama in view of Hirasawa et al. and further in view of Padmanabhan et al. (US 20190332848 A1).
Regarding claim 4, the combination of Koyama and Hirasawa as a whole does not expressly disclose ranking candidate information based on a size of the surveillance target person within the camera image. However, Padmanabhan, in the same field of endeavor, teaches displaying a ranking of the candidate information based on a size of the surveillance target person within the camera image (“In some cases, facial images for a particular individual may be organized by the size of the facial image in pixels” Padmanabhan, [0043]) (“capture facial images of individuals within a space so that they can be identified... In some instances, the video capture module 50 may be operably coupled to one or more still cameras and/or video cameras that are distributed within a space.” Padmanabhan, [0035]).
Therefore, it would have been obvious to one of ordinary skill in the art to combine Koyama and Hirasawa with Padmanabhan before the effective filing date of the claimed invention. The motivation for this combination of references would have been to implement a facial recognition system in a monitoring setting for a building or facility (Padmanabhan, [0002]). This motivation for the combination of Koyama, Hirasawa and Padmanabhan is supported by KSR exemplary rationale (G): some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention. See MPEP 2141(III).
Regarding claim 12, Koyama, Hirasawa and Padmanabhan disclose the image processing method according to claim 11, further comprising displaying a ranking of the candidate information, based on a size of the surveillance target person within the camera image (“In some cases, facial images for a particular individual may be organized by the size of the facial image in pixels” Padmanabhan, [0043]) (“capture facial images of individuals within a space so that they can be identified... In some instances, the video capture module 50 may be operably coupled to one or more still cameras and/or video cameras that are distributed within a space.” Padmanabhan, [0035]).
Therefore, combining Koyama, Hirasawa and Padmanabhan would meet the claim limitations for the same reasons as previously discussed in claim 4.
Regarding claim 18, Koyama, Hirasawa and Padmanabhan disclose the non-transitory storage medium according to claim 17, wherein the program causing the computer to display a ranking of the candidate information (“In some cases, facial images for a particular individual may be organized by the size of the facial image in pixels” Padmanabhan, [0043]) (“capture facial images of individuals within a space so that they can be identified... In some instances, the video capture module 50 may be operably coupled to one or more still cameras and/or video cameras that are distributed within a space.” Padmanabhan, [0035]).
Therefore, combining Koyama, Hirasawa and Padmanabhan would meet the claim limitations for the same reasons as previously discussed in claim 4.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Inquiries
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EMMANUEL SILVA-AVINA whose telephone number is (571)270-0729. The examiner can normally be reached Monday - Friday 11 AM - 8 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chineyere Wills-Burns, can be reached at (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/EMMANUEL SILVA-AVINA/Examiner, Art Unit 2673
/CHINEYERE WILLS-BURNS/Supervisory Patent Examiner, Art Unit 2673