DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments and amendments received November 25, 2025, have been fully considered. With regard to 35 U.S.C. § 102, Applicant argues that the cited prior art does not disclose the amended limitation directed to sensing of intrusion into the cabin of the vehicle (see Applicant's arguments, pages 5-6). This argument corresponds to the newly amended language of claims 2-10, and specifically to claim 2.
These arguments have been considered but are not persuasive, as addressed below. See the rejections below for how the art of record reads on the claimed invention, as well as the examiner's interpretation of the cited art in view of the presented claim set. Furthermore,
Kocamaz teaches:
[0121] Cameras with a field of view that include portions of the environment to the side of the vehicle 700 (e.g., side-view cameras) may be used for surround view, providing information used to create and update the occupancy grid, as well as to generate side impact collision warnings. For example, surround camera(s) 774 (e.g., four surround cameras 774 as illustrated in FIG. 7B) may be positioned to on the vehicle 700. The surround camera(s) 774 may include wide-view camera(s) 770, fisheye camera(s), 360 degree camera(s), and/or the like. Four example, four fisheye cameras may be positioned on the vehicle's front, rear, and sides. In an alternative arrangement, the vehicle may use three surround camera(s) 774 (e.g., left, right, and rear), and may leverage one or more other camera(s) (e.g., a forward-facing camera) as a fourth surround view camera.
Kocamaz, 0042-0043, 0121, emphasis added.
As outlined above, Kocamaz teaches an impact collision warning (an event warning) based on object detection, in which a priority is designated to a camera based on the detected event. The Kocamaz cameras are capable of detecting or monitoring a 360-degree or fisheye field of view. The claimed "sensing intrusion into the cabin of the vehicle" is similar to the Kocamaz system's detection of an impact collision warning for the vehicle, since the act of intrusion produces an impact on the vehicle. As such, the examiner maintains the rejection. Regarding Applicant's arguments directed to the newly added claims, see the rejections as outlined below.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 2-4 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Kocamaz et al. (US 2024/0420449).
Regarding claim 2, Kocamaz teaches:
2. An information processing device installed in a vehicle that is equipped with a plurality of cameras, the information processing device comprising: a control unit configured to execute deciding, in accordance with a predetermined event occurring while the vehicle is parked,
[0042] To generate output objects, tracking engine 124 iterates over the object subsets 212 (where each object subset 212 corresponds to a set of input objects having the same global object ID) and generates an output object for each object subset 212. For each object subset 212, tracking engine 124 identifies a highest-priority camera of the cameras associated with input objects in the subset. Each camera may have a default priority, e.g., a fixed priority value associated with the camera. Further, camera priority may be determined based on the scene. If an object is partially visible in one camera but completely visible in another camera, then a lower priority is assigned to the camera in which the object is partially visible, and a higher priority is assigned to the camera in which the object is completely visible. Camera priority may be determined based on characteristics of the camera and/or characteristics of the input object. A camera may be associated with a distance capability indicating whether the camera is suitable for capturing long distance, medium distance, or short distance objects. Camera priority may then be determined based on the distance capability of the camera and/or the distance of an input object. If the distance capability of a camera corresponds to the distance of an input object, then a high priority may be assigned to the camera. For example, a long-range camera may be assigned a high priority for an input object that is at a long distance from the camera, a medium priority for a camera that is at a medium distance from the camera, or a low priority for an object that is at a short distance from the camera.
Kocamaz, 0042-0043, 0122, emphasis added.
an order of priority of a plurality of pieces of recorded data recorded by the cameras, and uploading the recorded data to a predetermined server in an order in accordance with the order of priority,
Kocamaz, 0042-0043, 0029, 0127
wherein the cameras include a first camera for shooting forward of the vehicle, a second camera for shooting rearward of the vehicle, a third camera for shooting a vicinity of the vehicle, and a fourth camera for shooting inside a cabin of the vehicle, and when the predetermined event is sensing of intrusion into the cabin of the vehicle, the control unit decides the order of priority of the recorded data of the fourth camera to be the highest priority.
[0042] To generate output objects, tracking engine 124 iterates over the object subsets 212 (where each object subset 212 corresponds to a set of input objects having the same global object ID) and generates an output object for each object subset 212. For each object subset 212, tracking engine 124 identifies a highest-priority camera of the cameras associated with input objects in the subset. Each camera may have a default priority, e.g., a fixed priority value associated with the camera. Further, camera priority may be determined based on the scene. If an object is partially visible in one camera but completely visible in another camera, then a lower priority is assigned to the camera in which the object is partially visible, and a higher priority is assigned to the camera in which the object is completely visible. Camera priority may be determined based on characteristics of the camera and/or characteristics of the input object. A camera may be associated with a distance capability indicating whether the camera is suitable for capturing long distance, medium distance, or short distance objects. Camera priority may then be determined based on the distance capability of the camera and/or the distance of an input object. If the distance capability of a camera corresponds to the distance of an input object, then a high priority may be assigned to the camera. For example, a long-range camera may be assigned a high priority for an input object that is at a long distance from the camera, a medium priority for a camera that is at a medium distance from the camera, or a low priority for an object that is at a short distance from the camera.
[0121] Cameras with a field of view that include portions of the environment to the side of the vehicle 700 (e.g., side-view cameras) may be used for surround view, providing information used to create and update the occupancy grid, as well as to generate side impact collision warnings. For example, surround camera(s) 774 (e.g., four surround cameras 774 as illustrated in FIG. 7B) may be positioned to on the vehicle 700. The surround camera(s) 774 may include wide-view camera(s) 770, fisheye camera(s), 360 degree camera(s), and/or the like. Four example, four fisheye cameras may be positioned on the vehicle's front, rear, and sides. In an alternative arrangement, the vehicle may use three surround camera(s) 774 (e.g., left, right, and rear), and may leverage one or more other camera(s) (e.g., a forward-facing camera) as a fourth surround view camera.
Kocamaz, 0042-0043, 0121, emphasis added.
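For clarity of the record, the examiner's interpretation of the camera-priority logic quoted above (Kocamaz, 0042) may be illustrated by the following sketch. The sketch is illustrative only; all identifiers and priority increments are hypothetical and do not appear in the reference.

    # Illustrative sketch only (hypothetical identifiers, not code from Kocamaz).
    # Models the priority rules of Kocamaz para. 0042: each camera has a default
    # (fixed) priority, raised when the object is completely visible in that
    # camera and when the camera's distance capability matches the object's
    # distance from the camera.
    from dataclasses import dataclass

    @dataclass
    class Camera:
        name: str
        default_priority: int      # fixed per-camera priority value
        distance_capability: str   # "long", "medium", or "short"

    def camera_priority(camera: Camera, fully_visible: bool, object_distance: str) -> int:
        priority = camera.default_priority
        if fully_visible:
            priority += 2          # complete visibility outranks partial visibility
        if camera.distance_capability == object_distance:
            priority += 1          # distance capability matches the object's distance
        return priority

    def highest_priority_camera(observations):
        # observations: list of (Camera, fully_visible, object_distance) tuples
        return max(observations, key=lambda obs: camera_priority(*obs))[0]

    # Example: the side fisheye camera sees the object completely at short range.
    cameras = [
        (Camera("front_long_range", 1, "long"), False, "short"),
        (Camera("side_fisheye", 1, "short"), True, "short"),
    ]
    print(highest_priority_camera(cameras).name)  # prints "side_fisheye"

Under this reading, the camera in which the object is completely visible and whose distance capability matches the object's distance outranks the others, consistent with paragraph 0042.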
Regarding claim 3, Kocamaz teaches:
3. The information processing device according to claim 2, wherein, when the predetermined event is sensing of impact, the control unit decides the order of priority of the recorded data of the third camera to be the highest priority.
Kocamaz, 0042-0043
Regarding claim 4, Kocamaz teaches:
4. The information processing device according to claim 2, wherein the vehicle includes a multi-axis acceleration sensor, and when the predetermined event is sensing of impact, the control unit executes determining a direction from which the vehicle was subjected to an impact based on a detection signal of the multi-axis acceleration sensor, deciding the order of priority of the recording data of the first camera to be the highest priority when determination is made that the vehicle was subjected to an impact from a front, deciding the order of priority of the recording data of the second camera to be the highest priority when determination is made that the vehicle was subjected to an impact from a rear, and deciding the order of priority of the recording data of the third camera to be the highest priority when determination is made that the vehicle was subjected to an impact from a side.
[0121] Cameras with a field of view that include portions of the environment to the side of the vehicle 700 (e.g., side-view cameras) may be used for surround view, providing information used to create and update the occupancy grid, as well as to generate side impact collision warnings. For example, surround camera(s) 774 (e.g., four surround cameras 774 as illustrated in FIG. 7B) may be positioned to on the vehicle 700. The surround camera(s) 774 may include wide-view camera(s) 770, fisheye camera(s), 360 degree camera(s), and/or the like. Four example, four fisheye cameras may be positioned on the vehicle's front, rear, and sides. In an alternative arrangement, the vehicle may use three surround camera(s) 774 (e.g., left, right, and rear), and may leverage one or more other camera(s) (e.g., a forward-facing camera) as a fourth surround view camera.
Kocamaz, 0042-0043, 0121, 0185-0186, emphasis added.
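For clarity of the record, the mapping recited in claim 4, as interpreted, may be sketched as follows. This is an illustrative sketch only; the identifiers and the sign convention for the acceleration axes are hypothetical and are not drawn from the claims or the cited art.

    # Illustrative sketch only (hypothetical identifiers and sign convention).
    # Claim 4 as interpreted: a multi-axis acceleration signal yields an impact
    # direction, which selects the camera whose recorded data receives the
    # highest priority.
    def impact_direction(ax: float, ay: float) -> str:
        """Classify the impact direction from longitudinal (ax) and lateral (ay) acceleration."""
        if abs(ax) >= abs(ay):
            return "front" if ax < 0 else "rear"   # assumed convention: +ax points forward
        return "side"

    HIGHEST_PRIORITY_CAMERA = {
        "front": "first_camera",   # camera shooting forward of the vehicle
        "rear": "second_camera",   # camera shooting rearward of the vehicle
        "side": "third_camera",    # camera shooting the vicinity of the vehicle
    }

    # A strong rearward deceleration on the longitudinal axis indicates a frontal impact.
    print(HIGHEST_PRIORITY_CAMERA[impact_direction(-9.0, 1.2)])  # prints "first_camera"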
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 5-9 are rejected under 35 U.S.C. 103 as being unpatentable over Kocamaz et al. (US 2024/0420449) as applied to claims 2-4 above, and further in view of Shimomura et al. (US 2005/0099273).
Regarding claim 5, Kocamaz teaches:
5. (New) The information processing device according to claim 2,
however, Kocamaz fails to explicitly teach the following, which Shimomura teaches: wherein when the vehicle is parked, a drive system of the vehicle is stopped, door of the vehicle is locked, and an occupant of the vehicle has got off, and the control unit determines whether the occupant has finished getting off according to at least one of an image-capturing data of the fourth camera, a detection signal of a seating sensor in the cabin of the vehicle, or a determination as to whether or not a smart key for the vehicle does not exist in the vehicle.
Shimomura, 0019-0023, 0026
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Shimomura with the system of Kocamaz such that, when the vehicle is parked, a drive system of the vehicle is stopped, a door of the vehicle is locked, and an occupant of the vehicle has got off, the control unit determines whether the occupant has finished getting off according to at least one of image-capturing data of the fourth camera, a detection signal of a seating sensor in the cabin of the vehicle, or a determination as to whether or not a smart key for the vehicle exists in the vehicle. Doing so would allow capturing an image of a potential intruder around the vehicle upon detection of an abnormality of the vehicle, which can improve the usefulness of the vehicle intrusion monitoring system (Shimomura, 0007).
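For clarity of the record, the "at least one of" determination recited in claim 5, as interpreted, may be sketched as follows; any one of the three recited signals suffices. This is an illustrative sketch only, with hypothetical identifiers.

    # Illustrative sketch only (hypothetical identifiers). Claim 5 as interpreted:
    # any one of the three recited signals suffices to conclude that the occupant
    # has finished getting off.
    def occupant_has_got_off(cabin_image_empty: bool,
                             seat_sensor_occupied: bool,
                             smart_key_in_vehicle: bool) -> bool:
        return cabin_image_empty or (not seat_sensor_occupied) or (not smart_key_in_vehicle)

    # The seat sensor reports the seat is vacant, so the occupant has got off.
    print(occupant_has_got_off(False, False, True))  # prints "True"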
Regarding claim 6, Kocamaz and Shimomura teach:
6. (New) The information processing device according to claim 5; Shimomura further teaches: wherein when the predetermined event is sensing of the intrusion of a moving object into the cabin of the vehicle, the control unit determines the intrusion of the moving object into the cabin of the vehicle according to whether or not an unlock of a door of the vehicle by an improper method is sensed.
Shimomura, 0019-0023, 0026
Regarding claim 7, Kocamaz and Shimomura teach:
7. (New) The information processing device according to claim 6; Shimomura further teaches: wherein the control unit determines the intrusion of the moving object into the cabin of the vehicle when an intrusion sensor other than the cameras detects the moving object that has entered the cabin of the vehicle.
Shimomura, 0019-0023, 0026
Regarding claim 8, Kocamaz and Shimomura teach:
8. (New) The information processing device according to claim 7; Kocamaz further teaches: wherein the intrusion sensor is an infrared sensor or an ultrasonic sensor.
Kocamaz, 0111
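For clarity of the record, the intrusion determination of claims 6-8, as interpreted, may be sketched as follows: an improper door unlock (claim 6) or a non-camera intrusion sensor, such as an infrared or ultrasonic sensor (claims 7-8), triggers the determination. This is an illustrative sketch only, with hypothetical identifiers.

    # Illustrative sketch only (hypothetical identifiers). Claims 6-8 as
    # interpreted: intrusion into the cabin is determined from an improper door
    # unlock (claim 6) or from a non-camera intrusion sensor, e.g., an infrared
    # or ultrasonic sensor, detecting a moving object in the cabin (claims 7-8).
    def intrusion_into_cabin(improper_unlock_sensed: bool,
                             intrusion_sensor_triggered: bool) -> bool:
        return improper_unlock_sensed or intrusion_sensor_triggered

    print(intrusion_into_cabin(False, True))  # prints "True": infrared/ultrasonic sensor fired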
Regarding claim 9, Kocamaz and Shimomura teach:
9. (New) The information processing device according to claim 8; Kocamaz further teaches: wherein the order of priority when the predetermined event is the intrusion of the moving object into the cabin of the vehicle is set in an order of
1) the recorded data of the fourth camera,
2) the recorded data of the third camera,
3) the recorded data of the first camera, and
4) the recorded data of the second camera.
Kocamaz, 0042-0043
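For clarity of the record, the fixed upload ordering recited in claim 9, as interpreted, may be sketched as follows. This is an illustrative sketch only, with hypothetical identifiers.

    # Illustrative sketch only (hypothetical identifiers). Claim 9 as
    # interpreted: on intrusion into the cabin, recorded data is uploaded in
    # the fixed order fourth, third, first, second camera.
    INTRUSION_UPLOAD_ORDER = ["fourth_camera", "third_camera", "first_camera", "second_camera"]

    def upload_in_priority_order(recorded_data: dict, upload) -> None:
        for camera in INTRUSION_UPLOAD_ORDER:
            if camera in recorded_data:
                upload(camera, recorded_data[camera])

    upload_in_priority_order(
        {"first_camera": b"...", "fourth_camera": b"..."},
        lambda camera, data: print("uploading", camera),
    )  # uploads fourth_camera, then first_camera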
Claim Rejections - 35 USC § 103
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Kocamaz et al. (US 2024/0420449) and Shimomura et al. (US 2005/0099273) as applied to claims 5-9 above, and further in view of Tabata (US 2020/0369319).
Regarding claim 10, Kocamaz and Shimomura teach:
10. (New) The information processing device according to claim 9; however, Kocamaz and Shimomura fail to explicitly teach the following, which Tabata teaches: wherein a recording time by the cameras is a predetermined time length set in advance, and when the predetermined event is the intrusion of the moving object into the cabin of the vehicle, the recording time is changed so that to perform recording until the moving object leaves the cabin of the vehicle.
Tabata, 0025 and 0037.
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Tabata with the system of Kocamaz and Shimomura such that a recording time by the cameras is a predetermined time length set in advance, and when the predetermined event is the intrusion of the moving object into the cabin of the vehicle, the recording time is changed so as to perform recording until the moving object leaves the cabin of the vehicle. In a configuration in which recording is started only after an event is detected, neither the video data at the time of the event detection nor the video data before the event detection is recorded, which may obstruct appropriate understanding of the situation in which the event is detected; recording before, during, and after event detection therefore facilitates understanding of the video data of the detected event (see Tabata, 0025 and 0037).
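For clarity of the record, the recording-time behavior recited in claim 10, as interpreted, may be sketched as follows. This is an illustrative sketch only, with hypothetical identifiers.

    # Illustrative sketch only (hypothetical identifiers). Claim 10 as
    # interpreted: recording normally stops after a preset length, but on
    # intrusion into the cabin it continues until the moving object leaves.
    def recording_should_continue(elapsed_s: float,
                                  preset_length_s: float,
                                  intrusion_event: bool,
                                  object_still_in_cabin: bool) -> bool:
        if intrusion_event and object_still_in_cabin:
            return True            # extend recording until the intruder leaves
        return elapsed_s < preset_length_s

    print(recording_should_continue(90.0, 60.0, True, True))    # True: intruder still present
    print(recording_should_continue(90.0, 60.0, False, False))  # False: preset length elapsed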
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL T TEKLE whose telephone number is (571)270-1117. The examiner can normally be reached Monday-Friday 8:00-4:30 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Vaughn can be reached at 571-272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DANIEL T TEKLE/Primary Examiner, Art Unit 2481