DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office Action is in response to Applicant’s amendment filed 10/30/2025. Claims 1-5 are currently pending in this application.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5 are rejected under 35 U.S.C. 103 as being unpatentable over Fujita (JP 2015-170141 A, with reference to the machine translation provided herewith).
Claim 1, Fujita teaches:
A system (Fujita, Fig. 1) comprising:
a camera (Fujita, Fig. 1: 100, Paragraphs [0036-0037], An image capturing device 1 includes a camera (see Fujita, Paragraph [0040]).) configured to generate an image of an imaging area (Fujita, Paragraph [0037], The image capturing device 1 captures an image of the surroundings of the vehicle, which is an imaging area.); and
a terminal (Fujita, Fig. 1: 2) that is communicable with the camera (Fujita, Paragraph [0037], The image capturing device 1 transmits the captured image to the control device 2 via network 4.),
the terminal being configured to designate a detection area, in which an object is to be detected, based on a user input, the detection area being an area within the imaging area of the image generated by the camera (Fujita, Paragraphs [0038-0039], The display screen, in combination with the built-in program of the control device 2, enables the operator to arbitrarily set a designated range of the monitored object within the displayed shooting range. The designated range, set based on user input, is functionally equivalent to a detection area.),
the camera being configured to: extract the detection area of the image, detect whether an object is present in the detection area of the image (Fujita, Paragraph [0041], The detection means for detecting the intrusion of a person or another vehicle within a set specified range may reside in the photographing device 1. The capability of the photographing device 1 to identify a person or another vehicle within a set specified range, i.e. the detection area, is functionally equivalent to detecting whether an object is present in the detection area.), and
transmit a notification indicating detection of the object to another device (Fujita, Paragraph [0041], The portion of the camera that includes the known image processing program capable of realizing the function of the detection means is used to transmit intrusion information to the alarm device 3, i.e. another device. This portion of the camera is thus functionally equivalent to a transmission unit.).
Fujita does not explicitly teach:
Extract an image of the detection area of the image.
However, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, for a photographing device 1 that uses image processing functions of the built-in program for detecting the intrusion of a person or another vehicle within the specified range (see Fujita, Paragraphs [0039-0041]) to be capable of extracting an image within an image in order to identify said intrusion. The intrusion of people or other vehicles within a designated range based on the image information is functionally equivalent to an intrusion of people or other vehicles within a designated area of the image(s) captured by the photographing device. Thus, extracting an area of an image that contains the designated area is equivalent to extracting an image of the detection area of the image, and would not change the principle of operation of the system as a whole.
Claim 2, Fujita further teaches:
The system according to claim 1, wherein the another device is configured to operate in accordance with the notification received from the camera (Fujita, Paragraph [0042], In response to receiving the intrusion information, the alarm device 3 issues an alarm. It is noted that the source of the intrusion information may be from the camera itself (see Fujita, Paragraph [0041]).).
Claim 3, Fujita further teaches:
The system according to claim 1, wherein when another condition different from the detection of the object is met, the camera refrains from sending the notification (Fujita, Paragraph [0041], When the image capturing device 1 is provided with a detection means, it transmits intrusion information to the alarm device 3 when it detects an intrusion. Thus, for example, prior to the detection of the intrusion, the image capturing device 1 refrains from sending the intrusion information based on the condition that no intrusion is detected, which is another condition different from the detection of the object.).
Claim 4, Fujita teaches:
A camera (Fujita, Fig. 1: 100, Paragraphs [0036-0037], An image capturing device 1 includes a camera (see Fujita, Paragraph [0040]).) comprising:
an image sensor (Fujita, Paragraphs [0036-0037] and [0040], The portion of the imaging/photographing device 1 responsible for capturing the image is functionally equivalent to an image sensor.) configured to generate an image of an imaging area (Fujita, Paragraph [0037], The image capturing device 1 captures an image of the surroundings of the vehicle, which is an imaging area.);
a processor (Fujita, Paragraphs [0039-0040], The camera has a built-in image processing program.) configured to:
set a detection area, in which an object is to be detected, in the generated image based on a user input from a communicable terminal (Fujita, Paragraph [0040], The setting means for setting the designated range of the surveillance target within the range photographed by the photographing device 1 may be provided in the photographing device 1. The designated range is set based on the operator’s operation information transmitted from the control device 2, i.e. a communicable terminal.), the detection area being an area within the imaging area of the image generated by the camera (Fujita, Paragraphs [0038-0039], The display screen, in combination with the built-in program of the control device 2, enables the operator to arbitrarily set a designated range of the monitored object within the displayed shooting range. The designated range, set based on user input, is functionally equivalent to a detection area.),
extract the detection area of the image (Fujita, Paragraph [0041], The detection means for detecting the intrusion of a person or another vehicle within a set specified range may reside in the photographing device 1. To identify a person or another vehicle within the set specified range, the photographing device 1 processes the portion of the image within that range, i.e. the detection area, which is functionally equivalent to extracting the detection area of the image.), and
detect whether an object is present in the detection area of the image (Fujita, Paragraph [0041], The detection means for detecting the intrusion of a person or another vehicle within a set specified range may reside in the photographing device 1.); and
a network module configured to transmit a notification indicating a detection of the object to another device (Fujita, Paragraph [0041], The portion of the camera that includes the known image processing program capable of realizing the function of the detection means is used to transmit intrusion information to the alarm device 3, i.e. another device. This portion of the camera is thus functionally equivalent to a transmission unit.).
Fujita does not specifically teach:
Extract an image of the detection area of the image.
However, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, for a photographing device 1 that uses image processing functions of the built-in program for detecting the intrusion of a person or another vehicle within the specified range (see Fujita, Paragraphs [0039-0041]) to be capable of extracting an image within an image in order to identify said intrusion. The intrusion of people or other vehicles within a designated range based on the image information is functionally equivalent to an intrusion of people or other vehicles within a designated area of the image(s) captured by the photographing device. Thus, extracting an area of an image that contains the designated area is equivalent to extracting an image of the detection area of the image, and would not change the principle of operation of the system as a whole.
Claim 5, Fujita further teaches:
The system according to claim 2, wherein when another condition different from the detection of the object is met, the camera refrains from sending the notification (Fujita, Paragraph [0041], When the image capturing device 1 is provided with a detection means, it transmits intrusion information to the alarm device 3 when it detects an intrusion. Thus, for example, prior to the detection of the intrusion, the image capturing device 1 refrains from sending the intrusion information based on the condition that no intrusion is detected, which is another condition different from the detection of the object.).
Response to Arguments
Applicant's arguments filed 10/30/2025 have been fully considered but they are not persuasive.
In response to the Applicant’s argument that the Fujita reference fails to teach that the camera itself can detect the objects, the Examiner respectfully disagrees. The system of Fujita utilizes an image processing program to determine whether an intrusion of people or other vehicles occurs within a designated range (see Fujita, Paragraphs [0038-0041]). Fujita further discloses that the image processing program may be located in the control device 2 or the photographing device 1 itself.
The Examiner further notes that Applicant’s amendment to independent claims 1 and 4 to recite the step of “extract an image of the detection area of the image” changed the scope of the claims and required a new ground of rejection.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES J YANG whose telephone number is (571) 270-5170. The examiner can normally be reached 9:30am-6:00pm M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, BRIAN ZIMMERMAN can be reached at (571) 272-3059. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JAMES J YANG/ Primary Examiner, Art Unit 2686