DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Notice of Amendments
2. The Examiner acknowledges the amended claims filed on 12/13/2025.
- Claims 1, 8, 11, 13-14, 17, 19 and 20 have been amended.
- Claims 2 and 12 have been cancelled.
Response to Arguments
3. Applicant's arguments filed 12/13/2025 have been fully considered but they are not persuasive.
4. Regarding claim 1, Applicant argues that the combined teachings of Chiang and Kim fail to disclose “Responsive to determining that the output of the at least one accelerometer remains within the one or more threshold ranges during the motion detection time period, the mobile device processor activates the camera of the mobile device, displays a live stream image captured by the camera, starts a countdown timer, controls the camera to capture an image from the live stream upon expiration of the countdown time period, and stores an image file including the captured image”; see page 7, lines 23-27 and page 8, lines 1-2 of the Remarks. Specifically, Applicant argues that while Kim describes using an accelerometer for such a purpose, Kim describes such use only after the camera has already been activated and has captured a face in the preview screen; see page 8, lines 23-27.
In response to Applicant’s position, the Examiner points out that the claims do not preclude additional steps of camera usage and face detection before analyzing whether the accelerometer measurements are within range to then trigger a different camera usage as described in lines 10-19 of amended claim 1. The claims are not restricted to activating camera functionalities only at a time subsequent to determining whether accelerometer or movement outputs are within threshold ranges; that is, the camera can also be activated and execute additional, different functionalities at different times. Even if Chiang and Kim disclose more complex algorithms and additional camera processing and functionalities, that does not negate the fact that the combined teachings of Chiang and Kim still disclose that, responsive to determining that the output of the at least one accelerometer remains within the one or more threshold ranges during the motion detection time period, the mobile device processor activates the camera of the mobile device, displays a live stream image captured by the camera, starts a countdown timer, controls the camera to capture an image from the live stream upon expiration of the countdown time period, and stores an image file including the captured image (see Chiang, paragraphs 0026-0041; steps S53-S57, fig. 5; steps S73-S75, fig. 7; and Kim, paragraphs 0131-0161; fig. 2B). That is, both references, Chiang and Kim, execute live imaging, a timer, and image capturing after determining that the measured motion is within a permissible range, and the combined teachings provide for executing a timer and image capturing in response to detecting that measurements from an accelerometer are within a predetermined range. Applicant is reminded that the references in question and their pertinent teachings must be considered in their entirety rather than with a focus on a narrower point.
It appears that Applicant is focusing on the image capturing and image processing provided before the motion detection while disregarding that a timer, image capturing, and storing are also provided in response to determining that detected motion is within a predetermined range. The claims are not limited to enabling camera functionalities only in response to accelerometer outputs.
5. Regarding claims 3-11 and 13-20, Applicant submits arguments similar to those presented for claim 1 above; therefore, the response to arguments provided in section 4 is also applicable to claims 3-11 and 13-20.
Claim Rejections - 35 USC § 103
6. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
7. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
8. Claims 1, 3-5, 11, 13-14 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Chiang et al. (US-PGPUB 2012/0242818) in view of Kim et al. (US-PGPUB 2017/0180646).
Regarding claim 1, Chiang discloses a method for capturing an image by a mobile device (Camera phone 100; see paragraph 0025 and figs. 5, 7) that includes a processor (CPU 40; see fig. 1 and paragraph 0025) and motion detection (see paragraph 0032), the method comprising:
receiving, by the processor, a motion detection output (motion pattern detection; see paragraph 0032),
determining, by the processor (CPU 40; see fig. 1), whether the motion detection output remains within one or more threshold ranges during a motion detection time period (It is determined whether the electronic motion pattern is identical to a predetermined motion pattern defining an instruction of a functional operation of the digital camera, see step S53, fig. 5 and paragraph 0032);
responsive to determining that the motion detection output remains within the one or more threshold ranges during the motion detection time period (If YES at step S53, executing steps S54-S57; see fig. 5 and paragraph 0032. If YES at steps S71-S73, executing steps S74-S75; see fig. 7 and paragraph 0033. See flowchart and application of camera phone 100 in figures 5 and 7):
activating, by the processor, a camera of the mobile device (A predetermined manipulation manner of rotating the digital camera 100 in an angle back and forth (as shown in FIGS. 2a and 2b) can represent an instruction for taking a picture in a regular mode (one picture shooting at once); see paragraphs 0026, 0031, 0041);
displaying, by the processor, a live image stream captured by the camera on a display of the mobile device (Performing auto-focus/pre-procedure in the Preview Images; see steps S54-S55 in fig. 5 and step S74 in fig. 7, paragraphs 0032-0033);
starting, by the processor, a timer (Steps S71-S73 are governed by a timer, in other words, in order to trigger the auto-focus step S74, each of the steps S71, S72, and S73 has to be fulfilled within a predetermined time period; see paragraph 0033),
upon expiration of the timer, automatically controlling the camera, by the processor, to capture an image from the live image stream to produce a captured image (If Yes at step S53, a message indicating that the functional operation is about to be performed is rendered on the displayer of the digital camera (step S54) and the requisite pre-procedure(s) for the functional operation is performed (step S55). Next, the functional operation is performed to retrieve an object image (step S56); see fig. 5 and paragraph 0032. Following the step S74, an image is captured and processed (step S75); see fig. 7 and paragraph 0033); and
storing an image file including the captured image in a memory of the mobile device (Finally, the retrieved image data is stored in a memory unit of the digital camera (step S57); see fig. 5 and paragraph 0032. Once the functional operation is performed to retrieve object images, the image data retrieved by the image sensor 10 is sent to the image processor 20 to be saved in the memory unit 30; see paragraph 0029).
However, Chiang does not explicitly disclose an accelerometer and a count-down.
On the other hand, Kim discloses receiving an output of the at least one accelerometer (The sensing unit 140, which senses a moved degree of the body, includes an acceleration sensor that is activated; see paragraphs 0130, 0135, 0136); determining, by the processor, whether the output of the at least one accelerometer remains within one or more threshold ranges during a motion detection time period (Detecting if a moved degree sensed by the sensing unit satisfies a preset capturing condition; see paragraph 0131); responsive to determining that the output of the at least one accelerometer remains within the one or more threshold ranges during the motion detection time period (The controller 180 of the mobile terminal 100 executes a timer capturing by determining it as a capturing time point and executes a consecutive capturing by generating a next capturing signal at a reference time interval, while the moved degree of the body is within a preset range. If it is determined that the moved degree of the mobile terminal does not satisfy the preset capturing condition or is out of the reference range, a timer capturing or a consecutive capturing is stopped; see paragraphs 0131, 0161, 0147): activating a camera and displaying, by the processor (Controller/Processor 180; see fig. 1A and paragraphs 0045, 0255), a live image stream captured by the camera on a display of the mobile device (A preview screen 201 is output to the display unit 151; see paragraph 0133 and fig. 2A); starting, by the processor, a timer to count down time for a countdown time period (Once a moved degree of the mobile terminal 100 satisfies a preset capturing condition, the controller 180 executes a timer capturing of the camera 121a. An image 212 which changes according to a timer driving may be output to the display unit 151; see paragraph 0140 and fig. 2B); and upon expiration of the countdown time period, automatically controlling the camera, by the processor, to capture an image from the live image stream to produce a captured image (When the timer expires, a shutter of the camera 121a is operated to capture the preview screen 201 output to the display unit 151; see paragraph 0142 and fig. 2B).
Since Chiang already teaches displaying a message indicating that a functional operation is about to be performed (step S54; see fig. 5), it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Chiang and Kim to provide receiving an output of the at least one accelerometer; determining whether the output of the at least one accelerometer remains within one or more threshold ranges during a motion detection time period; and, responsive to determining that the output of the at least one accelerometer remains within the one or more threshold ranges during the motion detection time period: activating a camera, displaying a live image stream, starting a timer, and, upon expiration of the countdown time period, automatically capturing an image, for the purpose of providing an alternative/backup method of measuring motion while easily assisting and alerting the user in image preparation before the shot is taken.
Regarding claim 3, Chiang and Kim disclose everything claimed as applied above (see claim 1). However, Chiang fails to disclose displaying, by the processor, a time of the timer on the display of the mobile device during the countdown time period.
Nevertheless, Kim discloses displaying, by the processor, a time of the timer on the display of the mobile device during the countdown time period (An image 212 which changes according to a timer driving is output to the display unit 151; see paragraphs 0140, 0178 and figs. 2B, 6).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Chiang and Kim to provide displaying, by the processor, a time of the timer on the display of the mobile device during the countdown time period for the purpose of improving image quality by easily assisting and alerting the user in image preparation before the shot is taken.
Regarding claim 4, Chiang and Kim disclose everything claimed as applied above (see claim 3). However, Chiang fails to disclose displaying a time of the timer during the countdown time period includes displaying the time of the timer on a per second basis during the countdown time period.
On the other hand, Kim discloses displaying a time of the timer during the countdown time period includes displaying the time of the timer on a per second basis during the countdown time period (An image 212 which changes according to a timer driving is output to the display unit 151. If the re-timer capturing is executed within a predetermined time after the timer capturing is cancelled, the timer driving may be consecutively executed from a time point when the timer driving has been stopped (e.g., 2 seconds); see paragraphs 0140, 0178 and figs. 2B, 6).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Chiang and Kim to provide displaying a time of the timer during the countdown time period includes displaying the time of the timer on a per second basis during the countdown time period for the purpose of improving image quality by easily assisting and alerting the user in image preparation before the shot is taken.
Regarding claim 5, Chiang and Kim disclose everything claimed as applied above (see claim 3). However, Chiang fails to disclose displaying a time of the timer during the countdown time period includes displaying the time of the timer as an overlay on the live image stream.
Nevertheless, Kim discloses displaying a time of the timer during the countdown time period includes displaying the time of the timer as an overlay on the live image stream (Count-down image 212 is superimposed over preview image 201; see fig. 2B and paragraph 0140).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Chiang and Kim to provide displaying a time of the timer during the countdown time period includes displaying the time of the timer as an overlay on the live image stream for the purpose of improving image quality by easily assisting and alerting the user in image preparation before the shot is taken.
Regarding claim 11, Chiang discloses a mobile device (Camera phone 100; see fig. 1 and paragraph 0025) comprising:
a display (Graphic unit 50/Displayer of camera 100; see fig. 1 and paragraph 0029);
a camera (Image sensor 10; see fig. 1);
motion detection (see paragraph 0032);
a memory (Memory unit 30, see fig. 1); and
a processor (CPU 40; see fig. 1 and paragraph 0025), wherein the processor is operable in accordance with stored operating instructions (Instructions; see paragraph 0026) to:
receive a motion detection output and determine whether the motion detection output remains within one or more threshold ranges during a motion detection time period (It is determined whether the electronic motion pattern is identical to a predetermined motion pattern defining an instruction of a functional operation of the digital camera, see step S53, fig. 5 and paragraph 0032); and
responsive to determining that the motion detection output remains within the one or more threshold ranges during the motion detection period (If YES at step S53, executing steps S54-S57; see fig. 5 and paragraph 0032. If YES at steps S71-S73, executing steps S74-S75; see fig. 7 and paragraph 0033. See flowchart and application of camera phone 100 in figures 5 and 7):
activate the camera (A predetermined manipulation manner of rotating the digital camera 100 in an angle back and forth (as shown in FIGS. 2a and 2b) can represent an instruction for taking a picture in a regular mode (one picture shooting at once); see paragraphs 0026, 0031, 0041);
display on the display a live image stream captured by the camera (Performing auto-focus/pre-procedure in the Preview Images; see steps S54-S55 in fig. 5 and step S74 in fig. 7, paragraphs 0032-0033);
start a timer (Steps S71-S73 are governed by a timer, in other words, in order to trigger the auto-focus step S74, each of the steps S71, S72, and S73 has to be fulfilled within a predetermined time period; see paragraph 0033);
upon expiration of the timer, automatically control the camera to capture an image from the live image stream to produce a captured image (If Yes at step S53, a message indicating that the functional operation is about to be performed is rendered on the displayer of the digital camera (step S54) and the requisite pre-procedure(s) for the functional operation is performed (step S55). Next, the functional operation is performed to retrieve an object image (step S56); see fig. 5 and paragraph 0032. Following the step S74, an image is captured and processed (step S75); see fig. 7 and paragraph 0033); and
store an image file including the captured image in the memory (Finally, the retrieved image data is stored in a memory unit of the digital camera (step S57); see fig. 5 and paragraph 0032. Once the functional operation is performed to retrieve object images, the image data retrieved by the image sensor 10 is sent to the image processor 20 to be saved in the memory unit 30; see paragraph 0029).
However, Chiang does not expressly disclose an accelerometer and a count-down.
Nevertheless, Kim discloses at least one accelerometer (Acceleration sensor; see paragraphs 0130, 0135, 0136); receive an output of the at least one accelerometer (The sensing unit 140, which senses a moved degree of the body, includes an acceleration sensor that is activated; see paragraphs 0130, 0135, 0136); determine whether the output of the at least one accelerometer remains within one or more threshold ranges during a motion detection time period (Detecting if a moved degree sensed by the sensing unit satisfies a preset capturing condition; see paragraph 0131); and responsive to determining that the output of the at least one accelerometer remains within one or more threshold ranges during the motion detection time period (The controller 180 of the mobile terminal 100 executes a timer capturing by determining it as a capturing time point and executes a consecutive capturing by generating a next capturing signal at a reference time interval, while the moved degree of the body is within a preset range. If it is determined that the moved degree of the mobile terminal does not satisfy the preset capturing condition or is out of the reference range, a timer capturing or a consecutive capturing is stopped; see paragraphs 0131, 0161, 0147): activate the camera and display on the display a live image stream captured by the camera (A preview screen 201 is output to the display unit 151; see paragraph 0133 and fig. 2A); start a timer to count down time for a countdown time period (Once a moved degree of the mobile terminal 100 satisfies a preset capturing condition, the controller 180 executes a timer capturing of the camera 121a. An image 212 which changes according to a timer driving may be output to the display unit 151; see paragraph 0140 and fig. 2B); and upon expiration of the countdown time period, automatically control the camera to capture an image from the live image stream to produce a captured image (When the timer expires, a shutter of the camera 121a is operated to capture the preview screen 201 output to the display unit 151; see paragraph 0142 and fig. 2B).
Since Chiang already teaches displaying a message indicating that a functional operation is about to be performed (step S54; see fig. 5), it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Chiang and Kim to provide at least one accelerometer; receive an output of the at least one accelerometer; determine whether the output of the at least one accelerometer remains within one or more threshold ranges during a motion detection time period; and, responsive to determining that the output of the at least one accelerometer remains within one or more threshold ranges during the motion detection time period: activate the camera and display on the display a live image stream captured by the camera; start a timer to count down time for a countdown time period; and, upon expiration of the countdown time period, automatically control the camera to capture an image from the live image stream to produce a captured image, for the purpose of providing an alternative/backup method of measuring motion while easily assisting and alerting the user in image preparation before the shot is taken.
Regarding claim 13, Chiang and Kim disclose everything claimed as applied above (see claim 11). However, Chiang fails to disclose display on the display the time of the timer on a per second basis during the countdown time period.
On the other hand, Kim discloses display on the display a time of the timer on a per second basis during the countdown time period (An image 212 which changes according to a timer driving is output to the display unit 151; see paragraphs 0140, 0178 and figs. 2B, 6).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Chiang and Kim to provide display on the display the time of the timer on a per second basis during the countdown time period for the purpose of improving image quality by easily assisting and alerting the user in image preparation before the shot is taken.
Regarding claim 14, Chiang and Kim disclose everything claimed as applied above (see claim 11). However, Chiang fails to disclose display on the display the time of the timer as an overlay on the live image stream.
Nevertheless, Kim discloses display on the display a time of the timer as an overlay on the live image stream (Count-down image 212 is superimposed over preview image 201; see fig. 2B and paragraph 0140).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Chiang and Kim to provide display on the display the time of the timer as an overlay on the live image stream for the purpose of improving image quality by easily assisting and alerting the user in image preparation before the shot is taken.
Regarding claim 20, Chiang discloses an automated method (see figs. 5, 7) implementable by a processor (CPU 40; see fig. 1 and paragraph 0025) of a mobile device (Camera phone 100; see paragraph 0025) for capturing an image from a camera (Image sensor 10; see fig. 1) of the mobile device, the method comprising:
determining whether a motion detection output remains within the one or more threshold ranges during a motion detection time period (It is determined whether the electronic motion pattern is identical to a predetermined motion pattern defining an instruction of a functional operation of the digital camera, see step S53, fig. 5 and paragraph 0032);
responsive to determining that the motion detection output remains within the one or more threshold ranges during a motion detection time period (If YES at step S53, executing steps S54-S57; see fig. 5 and paragraph 0032. If YES at steps S71-S73, executing steps S74-S75; see fig. 7 and paragraph 0033. See flowchart and application of camera phone 100 in figures 5 and 7):
activating the camera (A predetermined manipulation manner of rotating the digital camera 100 in an angle back and forth (as shown in FIGS. 2a and 2b) can represent an instruction for taking a picture in a regular mode (one picture shooting at once); see paragraphs 0026, 0031, 0041);
displaying, on a display of the mobile device, a live image stream captured by the camera (Performing auto-focus/pre-procedure in the Preview Images; see steps S54-S55 in fig. 5 and step S74 in fig. 7, paragraphs 0032-0033);
starting a timer (Steps S71-S73 are governed by a timer, in other words, in order to trigger the auto-focus step S74, each of the steps S71, S72, and S73 has to be fulfilled within a predetermined time period; see paragraph 0033),
upon expiration of the timer, controlling the camera to capture an image from the live image stream to produce a captured image (If Yes at step S53, a message indicating that the functional operation is about to be performed is rendered on the displayer of the digital camera (step S54) and the requisite pre-procedure(s) for the functional operation is performed (step S55). Next, the functional operation is performed to retrieve an object image (step S56); see fig. 5 and paragraph 0032. Following the step S74, an image is captured and processed (step S75); see fig. 7 and paragraph 0033); and
storing, in a memory of the mobile device, an image file including the captured image (Finally, the retrieved image data is stored in a memory unit of the digital camera (step S57); see fig. 5 and paragraph 0032. Once the functional operation is performed to retrieve object images, the image data retrieved by the image sensor 10 is sent to the image processor 20 to be saved in the memory unit 30; see paragraph 0029).
However, Chiang does not expressly disclose an accelerometer and a count-down.
Nevertheless, Kim discloses determining whether an output of at least one accelerometer (The sensing unit 140, which senses a moved degree of the body, includes an acceleration sensor that is activated; see paragraphs 0130, 0135, 0136) remains within the one or more threshold ranges during a motion detection time period (Detecting if a moved degree sensed by the sensing unit satisfies a preset capturing condition; see paragraph 0131); and responsive to determining that the output of the at least one accelerometer remains within the one or more threshold ranges during a motion detection time period (The controller 180 of the mobile terminal 100 executes a timer capturing by determining it as a capturing time point and executes a consecutive capturing by generating a next capturing signal at a reference time interval, while the moved degree of the body is within a preset range. If it is determined that the moved degree of the mobile terminal does not satisfy the preset capturing condition or is out of the reference range, a timer capturing or a consecutive capturing is stopped; see paragraphs 0131, 0161, 0147): activating the camera and displaying, on a display of the mobile device, a live image stream captured by the camera (A preview screen 201 is output to the display unit 151; see paragraph 0133 and fig. 2A); starting a timer to count down time for a countdown time period (Once a moved degree of the mobile terminal 100 satisfies a preset capturing condition, the controller 180 executes a timer capturing of the camera 121a. An image 212 which changes according to a timer driving may be output to the display unit 151; see paragraph 0140 and fig. 2B); and upon expiration of the countdown time period, controlling the camera to capture an image from the live image stream to produce a captured image (When the timer expires, a shutter of the camera 121a is operated to capture the preview screen 201 output to the display unit 151; see paragraph 0142 and fig. 2B).
Since Chiang already teaches displaying a message indicating that a functional operation is about to be performed (step S54; see fig. 5), it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Chiang and Kim to provide determining whether an output of at least one accelerometer remains within the one or more threshold ranges during a motion detection time period and, responsive to determining that the output of the at least one accelerometer remains within the one or more threshold ranges during a motion detection time period: activating a camera and displaying, on a display of the mobile device, a live image stream captured by the camera; starting a timer to count down time for a countdown time period; and, upon expiration of the countdown time period, controlling the camera to capture an image from the live image stream to produce a captured image, for the purpose of providing an alternative/backup method of measuring motion while easily assisting and alerting the user in image preparation before the shot is taken.
9. Claims 6-7, 10, 15-16 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Chiang in view of Kim and further in view of Fukuya et al. (US-PGPUB 2018/0198983).
Regarding claim 6, Chiang and Kim disclose everything claimed as applied above (see claim 1). However, Chiang and Kim fail to disclose the captured image includes identification information for an installed electronic device, the method further comprising: communicating, by the processor, the captured image to a remote provisioning server to facilitate provisioning of the installed electronic device in a management system.
On the other hand, Fukuya discloses the captured image includes identification information for an installed device, the method further comprising: communicating, by the processor, the captured image to a remote provisioning server to facilitate provisioning of the installed device in a management system (In the display section 17 of the image pickup apparatus 10, a blackboard image 202 based on the transferred blackboard specification data is displayed in a form in which the blackboard image 202 is superimposed on a live-view image; see fig. 5 and paragraph 0115. A combined image data attached with a blackboard image is generated; see fig. 7 and paragraphs 0133, 0140-0142. The control section executes predetermined document preparation processing on the basis of the related image and various kinds of information corresponding to the related image. Document data to be prepared is the document such as the recording report. After the predetermined document is prepared, recording processing of the document is performed. In the processing step, subsequently, online submission processing or the like is performed concerning the document. In the online submission processing, the document is transferred to another terminal apparatus, such as a public office in charge or a customer company, from the terminal apparatus 40 via the network 70. Data of the recording report is uploaded to the external server 60; see paragraphs 0167, 0168, 0142).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Chiang, Kim and Fukuya to provide the captured image includes identification information for an installed electronic device, the method further comprising: communicating, by the processor, the captured image to a remote provisioning server to facilitate provisioning of the installed electronic device in a management system for the purpose of providing customizable image files that include detailed information of the captured object to improve the quality of the captured objects.
Regarding claim 7, Chiang and Kim disclose everything claimed as applied above (see claim 1). However, Chiang and Kim fail to disclose the image file includes metadata identifying a location of the mobile device or the camera at a time when the image was captured by the camera.
On the other hand, Fukuya discloses the image file includes metadata identifying a location of the mobile device or the camera at a time when the image was captured by the camera (A combined image data attached with a blackboard image is generated; see fig. 7 and paragraphs 0133, 0140-0142. The blackboard specification data includes information (information such as a construction work name and a location) common to the recording target construction work or the like set as a target, information (information such as an object name, a disposition position, and a size) peculiar to the recording object, and the like in addition to template image data of a blackboard image, frame image data for a simplified version of the blackboard image, and the like; see paragraph 0212).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Chiang, Kim and Fukuya to provide the image file includes metadata identifying a location of the mobile device or the camera at a time when the image was captured by the camera for the purpose of providing customizable image files that include detailed information of the captured object to improve the quality of the captured images.
Regarding claim 10, Chiang and Kim disclose everything claimed as applied above (see claim 1). However, Chiang and Kim fail to disclose displaying, by the processor, a graphical box overlaying the live image stream for use as a guide for including selected image content within the captured image.
On the other hand, Fukuya discloses displaying, by the processor, a graphical box overlaying the live image stream for use as a guide for including selected image content within the captured image (In the display section 17 of the image pickup apparatus 10, a blackboard image 202 based on the transferred blackboard specification data is displayed in a form in which the blackboard image 202 is superimposed on a live-view image; see fig. 5 and paragraph 0115. A combined image data attached with an updated blackboard image is generated; see fig. 7 and paragraphs 0133, 0140-0142).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Chiang, Kim and Fukuya to provide displaying, by the processor, a graphical box overlaying the live image stream for use as a guide for including selected image content within the captured image for the purpose of providing customizable image files that include detailed information of the captured object to improve the quality of the captured images.
Regarding claim 15, Chiang and Kim disclose everything claimed as applied above (see claim 11). However, Chiang and Kim fail to disclose the captured image includes identification information for an installed electronic device and wherein the processor is further operable in accordance with the stored operating instructions to: communicate the captured image to a remote provisioning server to facilitate provisioning of the installed electronic device in a management system.
On the other hand, Fukuya discloses the captured image includes identification information for an installed electronic device and wherein the processor is further operable in accordance with the stored operating instructions to: communicate the captured image to a remote provisioning server to facilitate provisioning of the installed electronic device in a management system (In the display section 17 of the image pickup apparatus 10, a blackboard image 202 based on the transferred blackboard specification data is displayed in a form in which the blackboard image 202 is superimposed on a live-view image; see fig. 5 and paragraph 0115. A combined image data attached with a blackboard image is generated; see fig. 7 and paragraphs 0133, 0140-0142. The control section executes predetermined document preparation processing on the basis of the related image and various kinds of information corresponding to the related image. Document data to be prepared is the document such as the recording report. After the predetermined document is prepared, recording processing of the document is performed. In the processing step, subsequently, online submission processing or the like is performed concerning the document. In the online submission processing, the document is transferred to another terminal apparatus, such as that of a public office in charge or a customer company, from the terminal apparatus 40 via the network 70. Data of the recording report is uploaded to the external server 60; see paragraphs 0167, 0168, 0142).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Chiang, Kim and Fukuya to provide the captured image includes identification information for an installed electronic device and wherein the processor is further operable in accordance with the stored operating instructions to: communicate the captured image to a remote provisioning server to facilitate provisioning of the installed electronic device in a management system for the purpose of providing customizable image files that include detailed information of the captured object to improve the quality of the captured images.
Regarding claim 16, Chiang and Kim disclose everything claimed as applied above (see claim 11). However, Chiang and Kim fail to disclose the image file includes metadata identifying a location of the mobile device or the camera at a time when the image was captured by the camera.
On the other hand, Fukuya discloses the image file includes metadata identifying a location of the mobile device or the camera at a time when the image was captured by the camera (A combined image data attached with a blackboard image is generated; see fig. 7 and paragraphs 0133, 0140-0142. The blackboard specification data includes information (information such as a construction work name and a location) common to the recording target construction work or the like set as a target, information (information such as an object name, a disposition position, and a size) peculiar to the recording object, and the like in addition to template image data of a blackboard image, frame image data for a simplified version of the blackboard image, and the like; see paragraph 0212).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Chiang, Kim and Fukuya to provide the image file includes metadata identifying a location of the mobile device or the camera at a time when the image was captured by the camera for the purpose of providing customizable image files that include detailed information of the captured object to improve the quality of the captured images.
Regarding claim 19, Chiang and Kim disclose everything claimed as applied above (see claim 11). However, Chiang and Kim fail to disclose the processor is further operable in accordance with the stored operating instructions to: display a graphical box overlaying the live image stream on the display, the graphical box being usable as a guide for including selected image content within the captured image.
Nevertheless, Fukuya discloses the processor is further operable in accordance with the stored operating instructions to: display a graphical box overlaying the live image stream on the display, the graphical box being usable as a guide for including selected image content within the captured image (In the display section 17 of the image pickup apparatus 10, a blackboard image 202 based on the transferred blackboard specification data is displayed in a form in which the blackboard image 202 is superimposed on a live-view image; see fig. 5 and paragraph 0115. A combined image data attached with an updated blackboard image is generated; see fig. 7 and paragraphs 0133, 0140-0142).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Chiang, Kim and Fukuya to provide the processor is further operable in accordance with the stored operating instructions to: display a graphical box overlaying the live image stream on the display, the graphical box being usable as a guide for including selected image content within the captured image for the purpose of providing customizable image files that include detailed information of the captured object to improve the quality of the captured images.
10. Claims 8-9 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Chiang in view of Kim and further in view of Kalama (US-PGPUB 2015/0256740).
Regarding claim 8, Chiang and Kim disclose everything claimed as applied above (see claim 1). However, Chiang and Kim fail to disclose prior to receiving the output of the at least one accelerometer, displaying, by the processor, a prompt on the display of the mobile device, the prompt instructing a user of the mobile device to shake the mobile device.
On the other hand, Kalama discloses prior to receiving motion detection outputs (Determining the device's azimuth and pitch with a compass and an accelerometer; see paragraph 0048), displaying, by the processor, a prompt on the display of the mobile device, the prompt instructing a user of the mobile device to shake the mobile device (Prompt the user to move the digital camera into a range of orientations, and initiating operation of the digital camera to capture a digital image of the target item once the camera has moved as requested. The prompting is performed by using visual feedback presented as text; see claim 14 of Kalama and paragraphs 0040-0042).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Chiang, Kim and Kalama to provide, prior to receiving the output of the at least one accelerometer, displaying, by the processor, a prompt on the display of the mobile device, the prompt instructing a user of the mobile device to shake the mobile device for the purpose of easily guiding and assisting the user to capture high quality images.
Regarding claim 9, Chiang, Kim and Kalama disclose everything claimed as applied above (see claim 8). However, Chiang and Kim fail to disclose the prompt is non-textual.
On the other hand, Kalama discloses the prompt is non-textual (The prompting provided with audio output from speakers or flashing lights; see paragraphs 0040-0041).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Chiang, Kim and Kalama to provide the prompt is non-textual for the purpose of providing different prompting techniques in accordance with user feedback preferences.
Regarding claim 17, Chiang and Kim disclose everything claimed as applied above (see claim 11). However, Chiang and Kim fail to disclose the processor is further operable in accordance with the stored operating instructions to: display a prompt on the display prior to receiving the output of the at least one accelerometer, the prompt instructing a user of the mobile device to shake the mobile device.
Nevertheless, Kalama discloses the processor is further operable in accordance with the stored operating instructions to: display a prompt on the display prior to receiving the motion detection output (Determining the device's azimuth and pitch with a compass and an accelerometer; see paragraph 0048), the prompt instructing a user of the mobile device to shake the mobile device (Prompt the user to move the digital camera into a range of orientations, and initiating operation of the digital camera to capture a digital image of the target item once the camera has moved as requested. The prompting is performed by using visual feedback presented as text; see claim 14 of Kalama and paragraphs 0040-0042).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Chiang, Kim and Kalama to provide the processor is further operable in accordance with the stored operating instructions to: display a prompt on the display prior to receiving the output of the at least one accelerometer, the prompt instructing a user of the mobile device to shake the mobile device for the purpose of easily guiding and assisting the user to capture high quality images.
Regarding claim 18, Chiang, Kim and Kalama disclose everything claimed as applied above (see claim 17). However, Chiang and Kim fail to disclose the prompt is non-textual.
On the other hand, Kalama discloses the prompt is non-textual (The prompting provided with audio output from speakers or flashing lights; see paragraphs 0040-0041).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Chiang, Kim and Kalama to provide the prompt is non-textual for the purpose of providing different prompting techniques in accordance with user feedback preferences.
Conclusion
11. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
12. Any inquiry concerning this communication or earlier communications from the examiner should be directed to CYNTHIA CALDERON whose telephone number is (571)270-3580. The examiner can normally be reached M-F 9:00 AM-5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, TWYLER HASKINS can be reached at (571)272-7406. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CYNTHIA CALDERON/Primary Examiner, Art Unit 2639
01/14/2026