DETAILED ACTION
This Office Action is in response to the application filed on 02/14/2024, wherein claims 1-30 have been examined and are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
1. Claims 1, 11, 13, 20, 22 and 26 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Vasconcelos et al. (U.S. 2024/0196084) hereinafter Vasconcelos.
Regarding claims 1 and 22, Vasconcelos discloses a first wireless device, and a method for wireless communication at a first wireless device, comprising:
a processing system that includes processor circuitry and memory circuitry that stores code, the processing system configured to cause the first wireless device to (Vasconcelos Figs. 1-2, [0031]: cameras 120 each include a processor, a memory, and camera application 103a):
establish a wireless communication link with a second wireless device based at least in part on a composite content capture associated with the first wireless device and the second wireless device (Vasconcelos [0035]-[0036]: one of the cameras 120 can be designated as a master camera, which can instruct the other cameras to make modifications. The master camera 120a captures an initial image of an object, receives subsequent images from other cameras 120, and determines a synchronization error; [0038], [0043], [0132]: a wireless connection can be used);
receive, via the wireless communication link and according to one or more content capture parameters associated with the first wireless device and the second wireless device, a first image captured by a sensor at the second wireless device (Vasconcelos [0035]-[0036]: the master camera 120a captures an initial image of an object, receives subsequent images from other cameras 120, and determines a synchronization error between a first image of a first camera and a second image of a second camera based on overlap between the images; [0038], [0043], [0132]: a wireless connection can be used); and
transmit, via the wireless communication link based at least in part on one or more alignment differences between the first image and a second image, adjustment information that indicates one or more adjustment parameters for adjustment of a position of the sensor of the second wireless device based at least in part on the one or more alignment differences between the first image and the second image (Vasconcelos Fig. 15, [0035]: camera application 103a of a first camera 120a determines a synchronization error between a first image of the first camera and a second image of a second camera based on overlap between the images and generates instructions for a second camera 120 to change its position to reduce the synchronization error between images; [0124]: instructions for changing the position of the second camera based on the synchronization error are generated and hence transmitted to the second camera; [0081]-[0084], [0091]-[0097]: locations of keypoints in the first and second images are used to determine overlap of the first image and the second image, hence alignment differences between the first and second images; [0040], [0108], Claim 7, Figs. 13-14: camera application 103a of camera 120 can generate on a user interface guidance to change the position of the camera, such as displayed arrow 1315, so that the synchronization error falls below a predetermined threshold).
Regarding claims 13 and 26, Vasconcelos discloses a second wireless device, and a method for wireless communications at a second wireless device, comprising:
a processing system that includes processor circuitry and memory circuitry that stores code, the processing system configured to cause the second wireless device to (Vasconcelos Figs. 1-2, [0031]: cameras 120 each include a processor, a memory, and camera application 103a):
establish a wireless communication link with a first wireless device based at least in part on a composite content capture associated with the first wireless device and the second wireless device (Vasconcelos [0035]-[0036]: one of the cameras 120 can be designated as a master camera, which can instruct the other cameras to make modifications. The master camera 120a captures an initial image of an object, receives subsequent images from other cameras 120, and determines a synchronization error; [0038], [0043], [0132]: a wireless connection can be used);
transmit, via the wireless communication link and according to one or more content capture parameters associated with the first wireless device and the second wireless device, a first image captured by a sensor at the second wireless device (Vasconcelos [0035]-[0036]: the master camera 120a captures an initial image of an object, receives subsequent images from other cameras 120, and determines a synchronization error between a first image of a first camera and a second image of a second camera based on overlap between the images. Hence, the other cameras 120 transmit captured images to camera 120a); and
receive, via the wireless communication link based at least in part on one or more alignment differences between the first image and a second image, adjustment information that indicates one or more adjustment parameters for adjustment of a position of the sensor at the second wireless device based at least in part on the one or more alignment differences between the first image and the second image (Vasconcelos Fig. 15, [0035]: camera application 103a of a first camera 120a determines a synchronization error between a first image of the first camera and a second image of a second camera based on overlap between the images and generates instructions for a second camera 120 to change its position to reduce the synchronization error between images. Hence, the second camera 120 receives instructions to change its position; [0124]: instructions for changing the position of the second camera based on the synchronization error are generated and hence transmitted to the second camera; [0081]-[0084], [0091]-[0097]: locations of keypoints in the first and second images are used to determine overlap of the first image and the second image, hence alignment differences between the first and second images; [0040], [0108], Claim 7, Figs. 13-14: camera application 103a of camera 120 can generate on a user interface guidance to change the position of the camera, such as displayed arrow 1315, so that the synchronization error falls below a predetermined threshold).
Regarding claims 11 and 20, Vasconcelos discloses all limitations of claims 1 and 13, respectively.
Vasconcelos discloses wherein the second image is captured by a second sensor at the first wireless device or by a third sensor at a third wireless device (Vasconcelos [0035]-[0036]: the master camera 120a captures an initial image of an object, receives subsequent images from other cameras 120, and determines a synchronization error between a first image of a first camera and a second image of a second camera based on overlap between the images; Fig. 15, [0035], [0081]-[0084], [0091]-[0097]: camera application 103a of a first camera 120a determines a synchronization error between the first image of the first camera and the second image of the second camera based on overlap between the images and generates instructions for a second camera 120 to change its position to reduce the synchronization error between images).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
2. Claims 2, 9, 14, 23 and 27 are rejected under 35 U.S.C. 103 as being unpatentable over Vasconcelos et al. (U.S. 2024/0196084) hereinafter Vasconcelos, in view of Campbell (U.S. 9,204,041).
Regarding claims 2 and 23, Vasconcelos discloses all limitations of claims 1 and 22, respectively.
Vasconcelos does not explicitly disclose wherein the processing system is further configured to cause the first wireless device to: receive, via the wireless communication link, at least one capability message comprising one or more parameters associated with the sensor at the second wireless device; and transmit, via the wireless communication link based at least in part on the one or more parameters indicated via the capability message, a content capture configuration message comprising the one or more content capture parameters for the composite content capture, wherein the first image captured by the sensor at the second wireless device is based at least in part on the content capture configuration message.
However, Campbell discloses the processing system is further configured to cause the first wireless device to: receive, via the wireless communication link, at least one capability message comprising one or more parameters associated with the sensor at the second wireless device; and transmit, via the wireless communication link based at least in part on the one or more parameters indicated via the capability message, a content capture configuration message comprising the one or more content capture parameters for the composite content capture, wherein the first image captured by the sensor at the second wireless device is based at least in part on the content capture configuration message (Campbell Col. 5, lines 43-55: the cameras include a processor and storage; Col. 4, lines 25-47, Col. 5, lines 60-67, Col. 6, lines 1-30: each slave camera sends its settings to the master camera, hence the master camera receives a capability message comprising parameters of the slave cameras. The master camera then defines the appropriate settings for each camera, and then images are captured from each of the cameras. Hence, the master camera transmits a content capture configuration message comprising content capture parameters; Col. 3, lines 55-60: captured images or video can be combined to form a single image or video frame).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the method and system, as disclosed by Vasconcelos, and further incorporate having the processing system further configured to cause the first wireless device to: receive, via the wireless communication link, at least one capability message comprising one or more parameters associated with the sensor at the second wireless device; and transmit, via the wireless communication link based at least in part on the one or more parameters indicated via the capability message, a content capture configuration message comprising the one or more content capture parameters for the composite content capture, wherein the first image captured by the sensor at the second wireless device is based at least in part on the content capture configuration message, as taught by Campbell, to synchronize captured images of multiple cameras so the images can be stitched together to create panoramic images or video (Campbell Col. 4, lines 5-30).
Regarding claims 14 and 27, Vasconcelos discloses all limitations of claims 13 and 26, respectively.
Vasconcelos does not explicitly disclose wherein the processing system is further configured to cause the second wireless device to: transmit, via the wireless communication link, a capability message comprising one or more parameters associated with the sensor at the second wireless device; and receive, via the wireless communication link based at least in part on the one or more parameters indicated via the capability message, a content capture configuration message comprising the one or more content capture parameters for the composite content capture, wherein the first image captured by the sensor at the second wireless device is based at least in part on the content capture configuration message.
However, Campbell discloses wherein the processing system is further configured to cause the second wireless device to: transmit, via the wireless communication link, a capability message comprising one or more parameters associated with the sensor at the second wireless device; and receive, via the wireless communication link based at least in part on the one or more parameters indicated via the capability message, a content capture configuration message comprising the one or more content capture parameters for the composite content capture, wherein the first image captured by the sensor at the second wireless device is based at least in part on the content capture configuration message (Campbell Col. 5, lines 43-55: the cameras include a processor and storage; Col. 4, lines 25-47, Col. 5, lines 60-67, Col. 6, lines 1-30: each slave camera sends its settings to the master camera, hence the slave cameras transmit a capability message comprising parameters of the slave cameras. The master camera then defines the appropriate settings for each camera, and then images are captured from each of the cameras. Hence, the slave cameras receive a capture configuration message comprising content capture parameters; Col. 3, lines 55-60: captured images or video can be combined to form a single image or video frame).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the method and system, as disclosed by Vasconcelos, and further incorporate having the processing system further configured to cause the second wireless device to: transmit, via the wireless communication link, a capability message comprising one or more parameters associated with the sensor at the second wireless device; and receive, via the wireless communication link based at least in part on the one or more parameters indicated via the capability message, a content capture configuration message comprising the one or more content capture parameters for the composite content capture, wherein the first image captured by the sensor at the second wireless device is based at least in part on the content capture configuration message, as taught by Campbell, to synchronize captured images of multiple cameras so the images can be stitched together to create panoramic images or video (Campbell Col. 4, lines 5-30).
Regarding claim 9, Vasconcelos discloses all limitations of claim 1.
Vasconcelos does not explicitly disclose wherein the one or more content capture parameters associated with the first wireless device and the second wireless device comprise one or more of a tone, an aspect ratio, a resolution, a content capture rate, or any combination thereof.
However, Campbell discloses wherein the one or more content capture parameters associated with the first wireless device and the second wireless device comprise one or more of a tone, an aspect ratio, a resolution, a content capture rate, or any combination thereof (Campbell Col. 5, lines 50-67, Col. 6, lines 1-30: the master camera may configure various settings of the slave cameras, such as frame rate, exposure time, resolution, color, and other operating parameters).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the method and system, as disclosed by Vasconcelos, and further incorporate having the one or more content capture parameters associated with the first wireless device and the second wireless device comprise one or more of a tone, an aspect ratio, a resolution, a content capture rate, or any combination thereof, as taught by Campbell, to synchronize captured images of multiple cameras so the images can be stitched together to create panoramic images or video (Campbell Col. 4, lines 5-30).
3. Claims 3, 9, 12, 15, 18, 21, 24 and 28 are rejected under 35 U.S.C. 103 as being unpatentable over Vasconcelos et al. (U.S. 2024/0196084) hereinafter Vasconcelos, in view of Sun et al. (U.S. 2025/0184621) hereinafter Sun.
Regarding claims 3 and 24, Vasconcelos discloses all limitations of claims 1 and 22, respectively.
Vasconcelos does not explicitly disclose wherein the processing system is further configured to cause the first wireless device to: receive, via the wireless communication link based at least in part on the adjustment information, an adjustment of the first image associated with the second wireless device; and generate a composite image based at least in part on the second image and the adjustment of the first image.
However, Sun discloses the processing system is further configured to cause the first wireless device to: receive, via the wireless communication link based at least in part on the adjustment information, an adjustment of the first image associated with the second wireless device; and generate a composite image based at least in part on the second image and the adjustment of the first image (Sun Figs. 1-3, [0015]-[0016]: multiple secondary camera devices receive and apply camera parameters set by a primary camera device, wherein the camera parameters include camera orientation as in step 120, hence adjustment information; [0021]-[0022]: the camera devices then capture video of a scene, i.e., an adjustment of an image, and transmit the video to one or more peer camera devices to form a composite video of the scene. Hence, a first peer camera receives an adjustment of an image, which is the image captured after the secondary camera applies the camera parameters, and generates a composite image based on all captured images; [0066]: wireless communication can be used).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the method and system, as disclosed by Vasconcelos, and further incorporate having the processing system further configured to cause the first wireless device to: receive, via the wireless communication link based at least in part on the adjustment information, an adjustment of the first image associated with the second wireless device; and generate a composite image based at least in part on the second image and the adjustment of the first image, as taught by Sun, to synchronize camera parameters so that video quality and visual aspect across devices are consistent with each other (Sun [0015]).
Regarding claims 9 and 18, Vasconcelos discloses all limitations of claims 1 and 13, respectively.
Vasconcelos does not explicitly disclose wherein the one or more content capture parameters associated with the first wireless device and the second wireless device comprise one or more of a tone, an aspect ratio, a resolution, a content capture rate, or any combination thereof.
However, Sun discloses wherein the one or more content capture parameters associated with the first wireless device and the second wireless device comprise one or more parameters associated with a tone, an aspect ratio, a resolution, a content capture rate, or any combination thereof (Sun Figs. 1-3, [0015]-[0016]: multiple secondary camera devices receive and apply camera parameters set by a primary camera device, wherein the camera parameters include camera orientation, capture frequency, resolution, color grading, and white balance).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the method and system, as disclosed by Vasconcelos, and further incorporate having the one or more content capture parameters associated with the first wireless device and the second wireless device comprise one or more parameters associated with a tone, an aspect ratio, a resolution, a content capture rate, or any combination thereof, as taught by Sun, to synchronize camera parameters so that video quality and visual aspect across devices are consistent with each other (Sun [0015]).
Regarding claims 12 and 21, Vasconcelos discloses all limitations of claims 1 and 13, respectively.
Vasconcelos does not explicitly disclose wherein the wireless communication link is established via a distributed coordination function.
However, Sun discloses wherein the wireless communication link is established via a distributed coordination function (Sun [0019], [0022], [0030], [0015]-[0017], Figs. 1-3: peer-to-peer cameras can be used, wherein a camera can capture and transmit captured video to one or more peer camera devices to form a composite video of a scene, wherein communication can use an ad hoc network as in [0066], hence a distributed coordination function).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the method and system, as disclosed by Vasconcelos, and further incorporate having the wireless communication link established via a distributed coordination function, as taught by Sun, for communication between camera devices (Sun [0066]).
Regarding claims 15 and 28, Vasconcelos discloses all limitations of claims 13 and 26, respectively.
Vasconcelos discloses wherein the processing system is further configured to cause the second wireless device to: output, to a user interface of the second wireless device, the adjustment information, and an adjustment of the first image based at least in part on outputting the adjustment information via the user interface (Vasconcelos [0040], [0108], Claim 7, Figs. 13-14: camera application 103a of camera 120 can generate on a user interface guidance to change the position of the camera, such as displayed arrow 1315, so that the synchronization error falls below a predetermined threshold, to guide the user to move the camera to capture an image. Hence, subsequent captured images are an adjustment of a first image).
Vasconcelos does not explicitly disclose transmit, via the wireless communication link based at least in part on the adjustment information, an adjustment of the first image associated with the second wireless device.
However, Sun discloses transmit, via the wireless communication link based at least in part on the adjustment information, an adjustment of the first image associated with the second wireless device (Sun Figs. 1-3, [0015]-[0016]: multiple secondary camera devices receive and apply camera parameters set by a primary camera device, wherein the camera parameters include camera orientation as in step 120, hence adjustment information; [0021]-[0022]: the camera devices then capture video of a scene and transmit the video to one or more peer camera devices to form a composite video of the scene. Hence, the secondary cameras capture and transmit an adjustment of an image based on the adjustment information received).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the method and system, as disclosed by Vasconcelos, and further incorporate transmitting, via the wireless communication link based at least in part on the adjustment information, an adjustment of the first image associated with the second wireless device, as taught by Sun, to synchronize camera parameters so that video quality and visual aspect across devices are consistent with each other (Sun [0015]).
4. Claims 4-5, 16 and 29 are rejected under 35 U.S.C. 103 as being unpatentable over Vasconcelos et al. (U.S. 2024/0196084) hereinafter Vasconcelos, in view of Sun et al. (U.S. 2025/0184621) hereinafter Sun, further in view of Woodman et al. (U.S. 9,521,398) hereinafter Woodman.
Regarding claim 4, Vasconcelos and Sun disclose all limitations of claim 3.
Vasconcelos discloses wherein the first wireless device captures the second image (Vasconcelos [0035]-[0036]: the master camera 120a captures an initial image of an object, receives subsequent images from other cameras 120, and determines a synchronization error between a first image of a first camera and a second image of a second camera based on overlap between the images; Fig. 15, [0035], [0081]-[0084], [0091]-[0097]: camera application 103a of a first camera 120a determines a synchronization error between the first image of the first camera and the second image of the second camera based on overlap between the images and generates instructions for a second camera 120 to change its position to reduce the synchronization error between images).
Vasconcelos does not explicitly disclose wherein the processing system is further configured to cause the first wireless device to: transmit, via the wireless communication link, the composite image, and wherein the composite image comprises a 360 degree view associated with a position of the first wireless device and the position of the second wireless device.
However, Sun discloses transmit, via the wireless communication link, the composite image (Sun [0021]-[0022]: the camera devices then capture video of a scene, i.e. an adjustment of an image, and transmit the video to one or more peer camera devices to form a composite video of the scene; [0030]: each camera can transmit its recorded video to every other camera; [0066]: the device can perform wireless communication).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the method and system, as disclosed by Vasconcelos and Sun, and further incorporate having each camera transmit its video to every other camera, wherein the video includes the composite video, as taught by Sun, for seamless capture, review, editing and compilation of video captured by multiple camera devices (Sun [0012]-[0013]).
Further, Woodman discloses the composite image comprises a 360 degree view associated with a position of the first wireless device and the position of the second wireless device (Woodman Col. 3, lines 9-25: captured images from cameras are stitched together to create a composite image allowing for a 360 degree view).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the method and system, as disclosed by Vasconcelos and Sun, and further incorporate having the composite image comprise a 360 degree view associated with a position of the first wireless device and the position of the second wireless device, as taught by Woodman, to image all around the cameras in a desired view (Woodman Col. 3, lines 9-25).
Regarding claims 16 and 29, Vasconcelos and Sun disclose all limitations of claims 15 and 28, respectively.
Vasconcelos discloses wherein the first wireless device captures the second image (Vasconcelos [0035]-[0036]: the master camera 120a captures an initial image of an object, receives subsequent images from other cameras 120, and determines a synchronization error between a first image of a first camera and a second image of a second camera based on overlap between the images; Fig. 15, [0035], [0081]-[0084], [0091]-[0097]: camera application 103a of a first camera 120a determines a synchronization error between the first image of the first camera and the second image of the second camera based on overlap between the images and generates instructions for a second camera 120 to change its position to reduce the synchronization error between images).
Vasconcelos does not explicitly disclose wherein the processing system is further configured to cause the second wireless device to: receive, via the wireless communication link, a composite image based at least in part on a concatenation of the second image and the adjustment of the first image, and wherein the composite image comprises a 360 degree view associated with a position of the first wireless device and the position of the sensor at the second wireless device.
However, Sun discloses the second wireless device being configured to: receive, via the wireless communication link, a composite image based at least in part on a concatenation of the second image and the adjustment of the first image (Sun [0021]-[0022]: the camera devices then capture video of a scene, i.e., an adjustment of an image, and transmit the video to one or more peer camera devices to form a composite video of the scene; [0030]: each camera can transmit its recorded video to every other camera; [0066]: the devices can perform wireless communication).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the method and system, as disclosed by Vasconcelos and Sun, and further incorporate having the second wireless device receive, via the wireless communication link, a composite image based at least in part on a concatenation of the second image and the adjustment of the first image, as taught by Sun, for seamless capture, review, editing and compilation of video captured by multiple camera devices (Sun [0012]-[0013]).
Further, Woodman discloses the composite image comprises a 360 degree view associated with a position of the first wireless device and the position of the second wireless device (Woodman Col. 3, lines 9-25: captured images from cameras are stitched together to create composite image allowing for 360 degree view).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the method and system, as disclosed by Vasconcelos and Sun, and further incorporate having the composite image comprise a 360 degree view associated with a position of the first wireless device and the position of the second wireless device, as taught by Woodman, to image all around the cameras in a desired view (Woodman Col. 3, lines 9-25).
Regarding claim 5, Vasconcelos and Sun disclose all limitations of claim 3.
Vasconcelos discloses wherein the processing system is further configured to cause the first wireless device to: establish a second wireless communication link with a third wireless device based at least in part on the composite content capture associated with the first wireless device and the second wireless device; receive, via the second wireless communication link, a third image captured by a third sensor at the third wireless device; transmit, via the second wireless communication link based at least in part on one or more second alignment differences between the first image and the third image, second adjustment information that indicates one or more second adjustment parameters for adjustment of a position of the third wireless device based at least in part on the one or more second alignment differences between the first image and the third image (Vasconcelos Fig. 1, [0035]-[0036]: one of the cameras 120 can be designated as a master camera, which can instruct the other cameras to make modifications. The master camera 120a captures an initial image of an object, receives subsequent images from other cameras 120, and determines a synchronization error between a first image of a first camera and a second image of a second camera based on overlap between the images).
Vasconcelos does not explicitly disclose receiving, via the second wireless communication link based at least in part on the adjustment information, an adjustment of the third image, wherein the composite image is further based at least in part on a concatenation of the second image, the adjustment of the first image, and the adjustment of the third image.
However, Sun discloses receiving, via the second wireless communication link based at least in part on the adjustment information, an adjustment of the third image (Sun Figs. 1-3, [0015]-[0016]: multiple secondary camera devices receive and apply camera parameters set by a primary camera device, wherein the camera parameters include camera orientation as in step 120, hence adjustment information; [0021]-[0022]: the camera devices then capture video of a scene, i.e., an adjustment of a first image and of a third image, each from one of the camera devices, and transmit the video to one or more peer camera devices to form a composite video of the scene. Hence, a first peer camera receives an adjustment of a third image, which is the image captured after the secondary camera applies the camera parameters, and generates a composite image based on all captured images; [0011], [0022]: video taken from different cameras can be stitched together to create a composite video; [0066]: wireless communication can be used).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the method and system, as disclosed by Vasconcelos and Sun, and further incorporate receiving, via the second wireless communication link based at least in part on the adjustment information, an adjustment of the third image, as taught by Sun, for seamless capture, review, editing, and compilation of video captured by multiple camera devices (Sun [0012]-[0013]).
Furthermore, Woodman discloses wherein the sensor at the second wireless device is associated with a second field of view that is at least partially overlapping with a third field of view associated with the third sensor at the third wireless device; and the composite image is further based at least in part on a concatenation of the second image, the adjustment of the first image, and the adjustment of the third image (Woodman Col. 11, lines 22-67, Col. 12, lines 1-25: each camera sends its settings to a master camera. The master camera then defines the appropriate settings for each camera, sends commands to the slave cameras to configure various settings and synchronize image capture, and multiple images are captured from the cameras. Hence, the first and second slave cameras capture the adjustment of the first image and the adjustment of the third image, respectively, and the image from the master camera is the second image; Col. 2, lines 10-20, Col. 3, lines 9-67, Col. 4, lines 1-45: images captured from multiple cameras can be combined to create a panoramic image, wherein the cameras can include first, second, third, and fourth cameras, or six cameras; Col. 11, lines 30-67: four cameras are oriented at 0°, 90°, 180°, and 270°, respectively, to capture panoramic images; hence, the composite image is based on a concatenation of the second image, the adjustment of the first image, and the adjustment of the third image. The captured video from each of the cameras at least partially overlaps with the images of the neighboring cameras).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the method and system, as disclosed by Vasconcelos and Sun, and further incorporate having the sensor at the second wireless device be associated with a second field of view that is at least partially overlapping with a third field of view associated with the third sensor at the third wireless device, and the composite image be further based at least in part on a concatenation of the second image, the adjustment of the first image, and the adjustment of the third image, as taught by Woodman, to obtain full panoramic or spherical images (Woodman Col. 11, lines 30-67).
5. Claims 10, 19 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Vasconcelos et al. (U.S. 2024/0196084) hereinafter Vasconcelos, in view of Woodman et al. (U.S. 9,521,398) hereinafter Woodman.
Regarding claims 10, 19 and 25, Vasconcelos discloses all limitations of claims 1, 13 and 22, respectively.
Vasconcelos discloses wherein the first image comprises a first video and the second image comprises a second video (Vasconcelos [0031]: the camera captures video).
Vasconcelos does not explicitly disclose that receiving the first image comprises receiving a live stream of the first video.
However, Woodman discloses that receiving the first image comprises receiving a live stream of the first video (Woodman Col. 12, lines 15-25, Col. 6, lines 5-12: the captured images or video can be wirelessly streamed to a remote device for live viewing).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the method and system, as disclosed by Vasconcelos, and further incorporate receiving the first image as a live stream of the first video, as taught by Woodman, for live viewing of the captured video from other devices (Woodman Col. 12, lines 15-25).
Allowable Subject Matter
Claims 6-8, 17 and 30 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is an examiner’s statement of reasons for allowance:
In light of the specification, the Examiner finds the claimed invention to be patentably distinct from the prior art of record.
Regarding claim 6, the prior art of record, taken individually or in combination, fails to explicitly teach or render obvious, within the context of the claims, the feature of the processing system being further configured to cause the first wireless device to: obtain the one or more alignment differences between the first image and the second image, wherein each of the one or more alignment differences comprises a respective distance between a first pixel in the first image and a corresponding second pixel in the second image exceeding a threshold distance, as cited in claim 6.
Regarding claim 8, the prior art of record, taken individually or in combination, fails to explicitly teach or render obvious, within the context of the claims, the feature of the processing system being further configured to cause the first wireless device to: generate the adjustment information based at least in part on an identifier of the second wireless device, wherein the identifier of the second wireless device indicates an order of the position of the sensor at the second wireless device for the composite content capture, as cited in claim 8.
Regarding claim 17, the prior art of record, taken individually or in combination, fails to explicitly teach or render obvious, within the context of the claims, the feature of wherein, to receive the adjustment information, the processing system is configured to cause the second wireless device to: receive, via the wireless communication link, a distance associated with the adjustment of the position of the sensor at the second wireless device, wherein the distance is based at least in part on a respective distance between a first pixel in the first image and a corresponding second pixel in the second image, as cited in claim 17.
Regarding claim 30, the prior art of record, taken individually or in combination, fails to explicitly teach or render obvious, within the context of the claims, the feature of wherein receiving the adjustment information comprises: receiving, via the wireless communication link, a distance associated with the adjustment of the position of the sensor at the second wireless device, wherein the distance is based at least in part on a respective distance between a first pixel in the first image and a corresponding second pixel in the second image, as cited in claim 30.
Claim 7 is allowable because it depends on allowable parent claim 6 as set forth above.
Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled “Comments on Statement of Reasons for Allowance.”
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KATHLEEN V NGUYEN whose telephone number is (571)270-0626. The examiner can normally be reached on M-F 9:00am-6:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jamie Atala can be reached on 571-272-7384. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KATHLEEN V NGUYEN/Primary Examiner, Art Unit 2486