Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
2. This is the initial Office Action based on the application filed on November 26, 2024. The Examiner acknowledges the following:
3. Claims 1 – 5 were initially filed.
4. The specification was amended on the same date to include cross-references to the patent applications related to the present application.
5. The drawings filed on 11/26/2024 are accepted by the Examiner.
6. Claims 1 – 5 are pending and are being considered for examination.
Information Disclosure Statement
7. The IDS document filed on 11/26/2024 is acknowledged by the Examiner.
Priority
8. Priority is based on Japanese patent application JP 2023-204078, filed 12/01/2023. Certified copies were filed with the Office on 11/09/2025.
Claim Rejections - 35 USC § 112
9. The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Regarding Claim 1:
Claim 1 discloses “a camera control device comprising a processor configured to:
shoot by using a camera when a first condition is satisfied; and switch a power supply of the camera from OFF to ON when a second condition which differs from the first condition is satisfied, wherein the second condition is satisfied is when it is predicted that it becomes a scene in which the camera is used”.
How can the processor use a camera to shoot, capture, or photograph when a first condition is satisfied? What is the first condition, and what constitutes it? Is it when the power supply of the camera is turned ON? The claim language discloses that the power supply of the camera is switched from OFF to ON when a second condition, different from the first one, is satisfied. How can the processor use the camera to shoot while the power is OFF? The claim language only turns the power on after the processor uses the camera to shoot, so it is unclear how claim 1 can operate. What are the first condition and the second condition? How can the processor do anything before the whole system, and even the processor itself, is powered ON?
How is the predicted “scene” achieved? What is the predicted scene, or the prediction of getting into it? Is it that the system with the camera is connected to a database of past captured images, and the processor can use image recognition to identify a scene, a background, or the like, so that a user or the processor can set the camera to capture a similar scene, or the camera can be set in the proper conditions to achieve similar results?
For example, Fig 4 shows that in order to capture anything, the camera power supply has to be ON. The specification does not explain how claim 1 operates. The specification teaches an on-board camera in a vehicle, but that is not in independent claim 1. Claim 2 adds a third condition that is satisfied when it is predicted that it is no longer the scene in which the camera is used. What is that? A different background, a different field of view (FOV) for the camera, etc.? Claim 3 discloses that the camera control device and the camera are mounted on a vehicle, but does not detail where or how that is done.
Therefore, “a processor configured to shoot by using a camera when a first condition is satisfied and then switch the power supply from OFF to ON” was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA applications the inventor(s), at the time the application was filed, had possession of the claimed invention. Therefore, claim 1 is rejected under 35 U.S.C. 112(a) as failing to comply with the enablement requirement, since claim 1 includes subject matter which was not described in the specification in such a way as to enable one skilled in the art to which it pertains, or with which it is most nearly connected, to make and/or use the invention.
Claim Rejection under 35 U.S.C. 112(b)
10. The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Regarding Claims 1 – 5:
Claim 1 recites “a camera control device comprising a processor configured to: shoot by using a camera when a first condition is satisfied; and switch a power supply of the camera from OFF to ON when a second condition which differs from the first condition is satisfied, wherein the second condition is satisfied is when it is predicted that it becomes a scene in which the camera is used”.
How can the processor use a camera to shoot, capture, or photograph when a first condition is satisfied? What is the first condition? What does Applicant mean by the first condition? Is it when the power supply of the camera is turned ON? The claim language discloses that the power supply of the camera is switched from OFF to ON when a second condition, different from the first one, is satisfied. How can the processor use the camera to shoot while the power is OFF? The claim language only turns the power on after the processor uses the camera to shoot, so it is unclear how claim 1 can operate. What are the first condition and the second condition? How can the processor do anything before the whole system, and even the processor itself, is powered ON?
Furthermore, how is the predicted “scene” achieved? What is the predicted scene, or the prediction of getting into it? Is it that the system with the camera is connected to a database of past captured images, and the processor can use image recognition to identify a scene, a background, or the like, so that a user or the processor can set the camera to capture a similar scene, or the camera can be set in the proper conditions to achieve similar results? Claim 2 is similar to claim 1. Claims 4 and 5 include similar limitations to claim 1.
Claim 2 adds a third condition that is satisfied when it is predicted that it is no longer the scene in which the camera is used. What is that? A different background, a different field of view (FOV) for the camera, etc.?
Claim 3 recites that the camera control device and the camera are mounted on a vehicle, but does not detail where or how that is done, and it adds nothing to clarify how claim 1 operates.
The language is confusing and does not point out what the inventor(s) regard as their invention.
Claims 1 – 5 are rejected under 35 U.S.C. 112(b) as failing to particularly point out and distinctly claim what Applicant regards as the invention. The claim language is confusing, and it does not enable one of ordinary skill in the art to ascertain the metes and bounds of the claimed invention.
Claim Rejections - 35 USC § 102
11. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 2, 4 and 5 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Yoav Grauer, US 2017/0256040 A1 (hereinafter “Grauer”).
Note: The rejection under 35 U.S.C. 112(b) is considered in the rejections below.
Regarding Claims 1, 2, 4 and 5:
Grauer teaches a system for image augmentation, the system comprising: at least one camera, configured to capture at least one image of a user in a scene at a first set of imaging parameters, such that the captured image comprises: a user portion, and a background portion with partial scene features cut off at the image frame borders due to limitations of said first set of imaging parameters; a database, comprising imagery of real-world environments; and a processor, communicatively coupled with said camera, and with said database, said processor configured to obtain at least a portion of said first set of imaging parameters, the obtained imaging parameters comprising at least the position and orientation of said camera when capturing said image, said processor further configured to retrieve from said database at least one background-image in accordance with said obtained imaging parameters, the retrieved background-image captured at a second set of imaging parameters, different from said first set of imaging parameters, such that said background-image comprises supplementary scene features located beyond the image frame borders of said captured image and supplements said partial scene features of said background portion, said processor further configured to generate an updated image in which said user appears relative to a background comprising at least said supplementary scene features, by combining said captured image with said background-image.
Regarding Claim 1:
Grauer teaches,
A camera control device (Fig 1, system 100 with camera 104. See [0019; 0020]) comprising a processor (Fig 1, processor 108. See [0025]) configured to:
shoot by using a camera when a first condition is satisfied (Fig 6, step/procedure 202 captures a self-image of the user with a first condition, which is a background portion with a partial scene. See [0042]); and switch a power supply of the camera from OFF to ON (Fig 1, system 100 may be provided with a power supply for the various components. See [0026]) when a second condition which differs from the first condition is satisfied (Fig 6, step/procedure 204, to obtain imaging parameters and environmental conditions of the self-captured image. The second condition, differing from the first, can be the imaging parameters or the environmental conditions, which are not the first condition. See [0043]), wherein the second condition is satisfied is when it is predicted that it becomes a scene in which the camera is used (Fig 2 shows the capturing of the user 102 with the Statue of Liberty 135 (See [0027 – 0029]). Fig 3 shows the image captured with the predicted scene 140, wherein the camera captures an image based on the retrieved background from the database that includes partial scene features of the self-image, as shown in Fig 6, steps/procedures 212 – 214. See [0047; 0048]).
Regarding Claim 2:
The rejection of claim 1 is incorporated herein. As for the claim 2 limitations, Grauer teaches a location measurement unit 114 which provides the real-world location of the camera 104 and user 102 and determines the global position and orientation coordinates of the camera and the viewing angle of the imaged scene (See [0022; 0023]), which means that by changing the angle of view of the camera, the predicted scene is no longer the one the camera used previously. The third condition corresponds to the change of the angle of view of the camera.
Regarding Claim 4:
The rejection of claim 1 is incorporated herein. Claim 4 pertains to the method steps for operating the camera control device of claim 1. In order to operate a camera control device as disclosed in claim 1, it would have been necessary to perform the method steps disclosed in claim 4. Additionally, claim 4 includes similar limitations which were already discussed in the claim 1 rejection. As for a method of operating such a system, Grauer Fig 6 teaches it (See [0042 – 0050]). See the claim 1 rejection for more details.
Regarding Claim 5:
The rejection of claims 1 and 4 is incorporated herein. The claim 5 limitations pertain to a non-transitory recording medium on which a computer program is recorded for causing a processor to execute a process performing the method steps of claim 4 so as to operate a camera control device as disclosed in claim 1. In order for the method of claim 4 to be executed to control/operate the camera control device of claim 1, it would have been necessary to have the method steps written in a computer program stored on a non-transitory computer readable medium. In that regard, Grauer teaches that the system 100 may include an additional memory or storage (not shown) for temporary storage of image data or other types of data (See [0036; 0031]). Moreover, it is well known for a camera device to include a memory and a processor for operating it. For example, Shimotono et al., US 2013/0063607 A1 (art from the IDS), teaches a camera system that transfers static image data while reducing power consumption. A static image application issues a frame transfer command at a certain transfer period to a camera module. In each transfer period, the camera module wakes up from a suspend state and generates static image data. After the end of the transfer, the camera module transitions to the suspend state. The camera module is able to transition to the suspend state at each transfer period and thus is able to reduce power consumption. Fig 1 shows the system with camera 100, CPU 11 and main memory 13. Fig 2 shows that the system 100 includes MPU 115, RAM 117 and ROM 119, which can store a program run by the MPU 115 to operate the system (See [0057]).
Claim Rejections - 35 USC § 103
12. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Yoav Grauer, US 2017/0256040 A1 (hereinafter “Grauer”) in view of Terumoto Komori, US 2020/0047748 A1 (hereinafter “Komori”).
Note: The rejection under 35 U.S.C. 112(b) is considered in the rejection below.
Regarding Claim 3:
The rejection of claim 1 is incorporated herein. Grauer teaches a camera system that can capture images with a background and part of the subject, which is the first condition; the system includes a power supply to turn power ON/OFF, and it can capture image information such as environmental conditions, which corresponds to the second condition. However, Grauer does not disclose that the camera is mounted on a vehicle, which, in the same field of endeavor, is taught by Komori.
As for “the camera and the camera control device are mounted on a vehicle”,
Komori teaches an object recognition device including: a sensor; a storage device; and a processor configured to detect a first object and a second object around the vehicle; initially set, based on a detection position of the first object, an integration determination distance used to determine that the first object and the second object belong to the same object; estimate a traveling direction of the first object based on the sensor detection information; increase the integration determination distance along the traveling direction; after increasing the integration determination distance along the traveling direction, determine, based on the integration determination distance, whether the first object and the second object belong to the same object; and output the first object and the second object as the same object when it is determined that they belong to the same object. Fig 9 shows that the sensor device 20 includes a vehicle state sensor 22 and a surrounding situation sensor 21 that may include a camera which can detect the situation around the vehicle (See [0055; 0056]). The control device 100 is a microcomputer that controls the vehicle 1 and includes a processor 110 and a storage device 120. The system can detect several objects T around the car and determine a range related to the positions of such objects in relation to the vehicle; depending on the objects, it includes “integration conditions”, which may be related to the position and size of the object, as seen in Fig 3 ([0042]). For example, Fig 10 shows the method, which detects the object (S100) and sets the integration determination distance for the detected object (S200), which corresponds to a third condition. In step (S300), the processor 110 performs the integration determination process and checks whether the plurality of objects satisfies the integration condition (See [0061 – 0066]).
As for “a threshold value of vehicle speed for satisfying the second condition is greater than the threshold value of the vehicle speed for satisfying the first condition and less than the threshold value of the vehicle speed for satisfying the third condition”: in Komori Fig 19, step S240, the processor 110 compares the reliability of the absolute speed V with a predetermined threshold. When the reliability is equal to or higher than the threshold value (step S240: Yes), the processor 110 determines that the object T is a “high reliability object” and estimates the traveling direction of the object. In this case, the processing proceeds to steps S250 and S260A, which are the same as those in the second example. That is, for a high reliability object, the processor 110 increases the increase amount of the integration determination distance DF along the traveling direction P according to the absolute speed V. On the other hand, when the reliability is lower than the threshold value (step S240: No), the processor 110 determines that the object T is a “low reliability object”. In this case, the processing proceeds to step S270, in which the processor 110 prohibits increasing the integration determination distance DF for the low reliability object along the traveling direction P. In other words, the processor 110 does not increase the integration determination distance DF along the traveling direction P but maintains the initially set integration determination distance DF (See [0090 – 0092]). Nothing in Komori or Grauer precludes the third condition from being the vehicle speed.
Modifying Grauer with the teachings of Komori would have benefited the imaging device by enabling it to recognize objects present in a captured image and to decide whether that was the predicted image/scene or not.
Conclusion
13. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
1. Y. Grauer, US 2017/0256040 A1 – teaches a method and system for image augmentation. A captured image of a user in a scene includes a user portion and a background portion with partial scene features cut off at the image frame borders. A first set of imaging parameters is obtained, including the position and orientation of the camera when capturing the image. A background-image is retrieved in accordance with the obtained imaging parameters, from a database that includes imagery of real-world environments. The retrieved background-image is captured at a second set of imaging parameters such that the background-image includes supplementary scene features located beyond the image frame borders of the captured image and supplementing information for the partial scene features of the background portion. An updated image is generated in which the user appears relative to a background that includes the supplementary scene features, by image fusion of the captured image with the background-image.
2. T. Komori, US 2020/0047748 A1 – has the same assignee and a different inventor. It teaches an object recognition device including: a sensor; a storage device; and a processor configured to detect a first object and a second object around the vehicle; initially set, based on a detection position of the first object, an integration determination distance used to determine that the first object and the second object belong to the same object; estimate a traveling direction of the first object based on the sensor detection information; increase the integration determination distance along the traveling direction; after increasing the integration determination distance along the traveling direction, determine, based on the integration determination distance, whether the first object and the second object belong to the same object; and output the first object and the second object as the same object when it is determined that they belong to the same object.
Contact
14. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARLY S.B. CAMARGO whose telephone number is (571) 270-3729. The examiner can normally be reached M-F, 8:00 AM - 5:00 PM.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lin Ye can be reached on 571-272-7372. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MARLY S CAMARGO/ Primary Examiner, Art Unit 2638