FINAL ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
(pages 8-11, made by Applicant’s Representative, filed 8/08/25)
Applicant’s arguments regarding claim interpretations construed under 35 U.S.C. 112(f) are persuasive in view of the recent claim amendments. Therefore, all previous interpretations under 35 U.S.C. 112(f) are no longer invoked and have been withdrawn.
Applicant's arguments (pages 9-10) regarding the 35 USC 103 rejection of independent claims (1, 13, 15 & 16) have been fully considered and are persuasive in view of the recent claim amendments. Therefore, this rejection has been withdrawn.
However, upon further consideration, a new ground(s) of rejection is made to claims (1, 13, 15 & 16) in view of newly evidenced art, which is taken in obviousness combination with the previously applied references, which are still considered pertinent to the claimed invention as respectively noted under the “Closest Prior Art” and “35 USC 103 Rejection” sections detailed below.
Therefore, Applicant's arguments (pages 9-10) regarding the 35 USC 103 rejection of independent claims (1, 13, 15 & 16) have been considered but are moot because the arguments do not apply to the combination of references being used in the current rejection.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action.
Claim Rejections – 35 USC § 112(b) – Definiteness Requirement
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-6, 8-12 and 14-17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 (lines 21-23) recites (with emphasis in bold): “determine the common set value to be applied to the plurality of imaging devices based on the information regarding the detection result acquired by the acquisition unit”. There is insufficient antecedent basis for the language “the acquisition unit” in the claim.
For the purposes of applying prior art, “the acquisition unit” recited in claim 1 will be interpreted as referring to the feature recited in lines 19-20: “acquire the information regarding the detection result of the sensor of each of the two or more imaging devices”.
Claims 2-6, 8-12 and 14 are also rejected under 35 U.S.C. 112(b) for depending from rejected claim 1.
Correction is required.
Claim 8 (in line 2) recites “wherein each of the plurality of imaging devices includes the sensor”.
Similarly, claim 10 (in lines 2-3) recites “wherein each of the plurality of imaging devices includes: the sensor”.
Claim 15 (in lines 3-5) recites “acquiring information regarding a detection result of a sensor that detects external brightness, the sensor being included in each of two or more imaging devices among a plurality of imaging devices”.
Similarly, claim 16 (in lines 2-4) recites “acquiring information regarding a detection result of a sensor that detects external brightness, the sensor being included in each of two or more imaging devices among a plurality of imaging devices”.
As written, claims 8, 10, 15 and 16 appear to require that each of two or more (or plurality of) imaging devices among a plurality of imaging devices include the same sensor that detects external brightness.
However, the specification describes a system in which each imaging device includes a separate sensor that detects external brightness (e.g. Figures 1-3: each imaging device 2 has a separate sensor 3, para [0037-0039]).
Based on this inconsistency, it is unclear whether the claims require that each of the two or more (or plurality of) imaging devices among a plurality of imaging devices share a common sensor, or whether the claims require an arrangement in which each imaging device has a separate sensor that detects external brightness.
See MPEP 2173.03: “A claim, although clear on its face, may also be indefinite when a conflict or inconsistency between the claimed subject matter and the specification disclosure renders the scope of the claim uncertain as inconsistency with the specification disclosure or prior art teachings may make an otherwise definite claim take on an unreasonable degree of uncertainty. In re Moore, 439 F.2d 1232, 1235-36, 169 USPQ 236, 239 (CCPA 1971); In re Cohn, 438 F.2d 989, 169 USPQ 95 (CCPA 1971); In re Hammack, 427 F.2d 1378, 166 USPQ 204 (CCPA 1970).”
Claim 11 is also rejected under 35 U.S.C. 112(b) for depending from rejected claim 10, for the reasons discussed above.
Claim 17 is also rejected under 35 U.S.C. 112(b) for depending from rejected claim 16, for the reasons discussed above.
For the purposes of applying prior art, claims 8, 10-11, and 15-17 will be read as requiring that each imaging device includes a separate sensor that detects external brightness.
Clarification is required.
Closest Prior Art
The prior art (cited on PTO-892) is considered pertinent to applicant's disclosure. Among these, the following references are considered to be the closest, collectively disclosing the state of the art concerned with applying common photographing parameters in the field of 3D imaging using a configuration of multiple imaging devices.
CHARLES (GB 2535742) – applied to 35 USC 103 rejection (English translation provided with previous office action), see Fig. 1-2, pages 9-21, and ABSTRACT.
MUTO (JP 2006217357) – applied to 35 USC 103 rejection (English translation provided with previous office action), see Fig. 3, para [0009-0011, 0031-35, 0103-105].
SHIOZAKI (US 20120044373) – applied to 35 USC 103 rejection, see Fig. 1-3, para [0018, 0032, 0043, 0045, 0059-0061, 0076-0077]. See para [0005] in view of para [0061]: SHIOZAKI aims to solve the problem that, when images of the same object are captured using a plurality of image capturing apparatuses, an unnatural impression occurs in a generated three-dimensional image due to differing parameters amongst the image capturing apparatuses.
SHIOZAKI discloses a 3D imaging system (Fig. 1, para [0005, 0061]) configured with a plurality of imaging devices (Fig. 1: master camera 10 & slave cameras 11-12, not limited to the number of cameras shown, para [0018]) for capturing the same subject (Fig. 1: object 20, para [0005, 0061]). Each camera 10-12 (Fig. 2) has a memory (124/127), a processor (123/140), and a sensor for detecting brightness (image sensor 121 and light meter unit 142 used to measure external brightness, para [0019, 0025]), and outputs results (shot image information from each camera such as brightness, luminance, exposure, and flash information, para [0045, 0061]) to a controller 120 such that common parameters can be calculated, output, and synchronized for capture amongst the plural cameras 10-12 per Fig. 3: steps s100-s104 (para [0043-45]) and steps s110-s114 (para [0054-56]), for the motivated reason discussed in the last sentence of para [0061]: in performing image synthesis such as three-dimensional image acquisition, natural shot images can be obtained.
NAKANO (US 20140362246) – applied to 35 USC 103 rejection, see Abstract, Fig. 1-8 and para [0072, 0075-0078, 0083, 0085-86, 0132-137].
FLORES (US 20150306824) – see 3D imaging photography booth in Fig. 6-8, method in Fig. 10-12 and discussion in para [0097, 0101-103, 0106, 0118, 0125], which automatically determines group parameters to set to cameras 105 based on plural images 603 received from the cameras 105.
KATTA (US 20010019363) – applied to 35 USC 103 rejection, see Fig. 1, 4, & 7-8 and para [0281-311, 0356-359].
UCHIDA (US 20180131857) – see 3D imaging booth in Fig. 1A & 12, method in Fig. 9 and para [0022, 0025, 0036, 0041, 0046, 0050-53, 0069, 0090-91, 0097, 0101].
NOTE: Examiner welcomes INTERVIEW(s) to discuss the instant application’s claimed invention as it corresponds to the specification embodiments, as well as the similarities/differences taught or not taught by the prior art. In the interest of compact prosecution, Applicant’s arguments/amendments should address not only the cited closest art applied/relied on in the 35 USC 103 rejection (below), but also the other cited closest art not applied/relied on.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 4-5, and 8-17 are rejected under 35 U.S.C. 103 as being unpatentable over CHARLES (GB 2535742) in view of SHIOZAKI (US 20120044373) in view of MUTO (JP 2006217357) – hereafter referred to by the underlined names shown.
As per INDEPENDENT CLAIM 1, CHARLES teaches an imaging system (Fig. 1-2 and Abstract: 3D scanning apparatus/system 1 – see page 9, lines 33-35) comprising:
a plurality of imaging devices that include two or more imaging devices (Fig. 1-2: master imaging module 10 and slave imaging modules 11 are arranged in a circle to capture images of the same object/subject, wherein each module 10/11 includes a vertical arrangement of plural digital cameras 21 – see page 11, lines 22-27; page 14, lines 21-29); and
a controller that communicates with the plurality of imaging devices (Fig. 1-2: control arrangement comprises central controller/computer and local controllers 37/49 which communicate with the plural cameras 21 – see page 12, lines 22-35; page 13, lines 1-31), wherein each of the plurality of imaging devices:
acquire, from the controller, a common set value regarding imaging, and perform a setting regarding the imaging based on the common set value (Central controller/computer and local controllers 37/49 automatically adjust the positions of the plural cameras 21 depending on the size, shape and position of the object to undergo 3D imaging AND issue a “common set value” command to the cameras 21 to capture images simultaneously, which is considered to be a brightness parameter associated with a sensor exposure time as the common set value to each camera – see page 11, lines 25-32; page 12, lines 22-35; page 13, lines 1-16; page 14, lines 21-30), each of the two or more imaging devices further includes:
a housing (Fig. 1-2: each camera 21 has a housing as shown for each camera 21 on master module in Fig. 5-7 and each camera 21 on slave module in Fig. 12-14); and a sensor (Each digital camera 21 is implied to have a sensor for capturing images – see page 14, lines 21-29 in view of page 13, lines 1-16).
Per the above citations, CHARLES discloses a central controller that communicates with respective local controllers/processors, which respectively interface with the plural cameras 21 positionally grouped (i.e. by master module 10 & slave modules 11) 360 degrees at different angles around the same object, wherein the central controller outputs the common set value (i.e. sensor exposure time) to the totality of plural cameras 21, and the central controller receives captured images of the same object from different angles, which are stored in memory and processed into a 3D image.
However, CHARLES is silent as to each camera’s interior components with regard to a memory, a processor, and a sensor for detecting brightness, and thus does not meet the following limitations:
“wherein each of the plurality of imaging devices includes: one or more memories, and at least one processor each coupled to at least one of the one or more memories; a sensor that detects external brightness of the housing, wherein the processor of each of the two or more imaging devices is further configured to output information regarding a detection result of the sensor to the controller” -AND- “the controller is configured to: acquire the information regarding the detection result of the sensor of each of the two or more imaging devices, determine the common set value to be applied to the plurality of imaging devices based on the information regarding the detection result acquired by the acquisition unit, output the common set value to the plurality of imaging devices”.
Despite CHARLES’ shortcomings, the missing features underlined above were well known in the related field of 3D imaging configured with a plurality of imaging devices for capturing the same subject. For example, see prior art SHIOZAKI’s para [0005], which aims to solve the problem that, when images of the same object are captured using a plurality of image capturing apparatuses, an unnatural impression occurs in a generated three-dimensional image due to differing parameters amongst the image capturing apparatuses.
SHIOZAKI discloses a 3D imaging system (Fig. 1, para [0005, 0061]) configured with a plurality of imaging devices (Fig. 1: master camera 10 & slave cameras 11-12, not limited to the number of cameras shown, para [0018]) for capturing the same subject (Fig. 1: object 20, para [0005, 0061]). Each camera 10-12 (Fig. 2) has a memory (124/127), a processor (123/140), and a sensor for detecting brightness (image sensor 121 and light meter unit 142 used to measure external brightness, para [0019, 0025]), and outputs results (shot image information from each camera such as brightness, luminance, exposure, and flash information, para [0045, 0061]) to a controller 120 such that common parameters can be calculated, output, and synchronized for capture amongst the plural cameras 10-12 per Fig. 3: steps s100-s104 (para [0043-45]) and steps s110-s114 (para [0054-56]), for the motivated reason discussed in the last sentence of para [0061]: in performing image synthesis such as three-dimensional image acquisition, natural shot images can be obtained.
Thus, when considering the collective knowledge bestowed by each applied prior art, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to COMBINE the teachings of SHIOZAKI into suitable modification with the teachings of CHARLES to produce Applicant’s claimed invention with the structural arrangement / functional configuration stated in said underlined limitations above for the MOTIVATED REASON of producing a synthesized 3D image with a natural shot appearance in the analogous art of 3D imaging using images from a plurality of cameras positioned around the same target subject.
CHARLES in view of SHIOZAKI do not teach the concept of determining a common set value to be applied to one group of cameras amongst the total groups of cameras using information regarding the detection result of a camera’s sensor in the one group. Therefore, CHARLES in view of SHIOZAKI do not teach the following limitations (emphasis in bold):
“the controller is configured to: group the plurality of imaging devices into a plurality of groups; and determine the common set value to be applied to one group including a first number of imaging devices based on the information regarding the detection result of the sensor of each of a second number of imaging devices belonging to the one group”.
However, the missing features underlined above were well known in the related field of imaging with a plurality of cameras configured to apply a common set value. For example, prior art MUTO’s Fig. 3, in view of para [0009-0011, 0031-35, 0103-105], discloses that a central computer 7 groups the digital cameras 1 to 6 into a plurality of camera groups, such as first and second groups G1, G2, and simultaneously transmits a control command to the digital cameras 1 to 6 belonging to the same group (G1 or G2), wherein cameras in the same group (G1 or G2) execute a common set value operation when simultaneously photographing an object. Per para [0032 & 0105], the computer 7 makes the grouping determination (i.e. G1 or G2) based on images and camera information (such as imaging mode and exposure) received from each of the digital cameras 1 to 6, wherein the images and exposure information would at least be from each camera’s sensor, being indicative of environmental conditions for imaging the subject, and are considered to be a “detection result that includes external brightness”, which the computer 7 uses to determine a common set value such as exposure time, shutter speed, and F value to be applied to the plural cameras in one group (G1 or G2) per para [0035, 0104-0105].
Thus, when considering the collective knowledge bestowed by each applied prior art, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to COMBINE the teachings of MUTO into suitable modification with the teachings of CHARLES in view of SHIOZAKI to produce Applicant’s claimed invention with the structural arrangement / functional configuration stated in said underlined limitation above for the MOTIVATED REASON of producing a 3D image with a natural shot appearance and consistent exposure in the analogous art of imaging with a plurality of cameras configured to apply a common photographing parameter.
Furthermore, given the prior art combination discussed above, the following limitation…
“the second number being more than or equal to 2, and is less than or equal to the first number, each of the first number of imaging devices and each of the second number of imaging devices capture images of a same subject from different angles”,
…would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention because the prior art CHARLES discloses that any number of cameras may be used in a 3D scanning photographing system (Abstract) with a plurality of cameras 21 (Fig. 1-2) arranged vertically and positioned 360 degrees in a circle around a same subject (page 10, lines 27-35) for capturing images of the same subject from different angles, for the motivated reason of choosing a number of cameras that best suits the size/shape of the object to be 3D scanned (page 11, lines 25-27) in the analogous art of photographing an object with a plurality of digital cameras.
As per CLAIM 2, CHARLES in view of SHIOZAKI in view of MUTO teaches the imaging system according to claim 1, wherein the common set value includes a brightness set value regarding brightness of an image generated by the imaging in each of the plurality of imaging devices (See prior art combination discussed in claim 1 over teachings of CHARLES’ 3D photo booth in Fig. 1-2 taken with SHIOZAKI’s teachings in Fig. 1-3 in view of para [0025, 0040, 0043-45, 0054-56, 0061] which use brightness in shot image to determine a common exposure setting that includes brightness; AND MUTO’s teachings in Fig. 3 in view of para [0009-0011, 0031-35, 0103-105]: images and camera information (such as imaging mode and exposure) received from each of the digital cameras 1 to 6 is used to determine a common setting such as imaging time, exposure, shutter speed and F-value).
As per CLAIM 4, CHARLES in view of SHIOZAKI in view of MUTO teaches the imaging system according to claim 1, wherein the at least one processor determines, as the information regarding the detection result of the sensor, a temporary set value regarding the imaging based on the detection result of the sensor, and the controller is further configured to determine the common set value based on the temporary set value determined by each of the two or more imaging devices (Given prior art combination discussed in claim 1, this feature is considered obvious over teachings of CHARLES’ 3D photo booth in Fig. 1 taken with SHIOZAKI’s teachings in Fig. 1-3 to use plural cameras to capture a 3D image, each camera having adjustable exposure settings for shutter 144 (shutter speed), diaphragm 211, sensor exposure time per para [0025, 0033, 0061] wherein the camera controller(s) 120/140 may use initial settings “temporary set value” that include exposure condition settings to determine a common set value per para [0043-45, 0054-56, 0061]; AND further view of MUTO’s teachings in Fig. 3 in view of para [0009-0011, 0031-35, 0103-105] that central computer 7 groups the plural digital cameras based on camera information such as imaging mode and exposure AND imaging conditions such as exposure time, shutter speed, and F value to be applied to the plural cameras in one group (G1 or G2) per para [0035, 0103-0105]).
As per CLAIM 5, CHARLES in view of SHIOZAKI in view of MUTO teaches the imaging system according to Claim 4 but remains silent to: “wherein the controller is further configured to determine, as the common set value, a mode value of two or more temporary set values determined by the two or more imaging devices”.
However, Official Notice (MPEP § 2144.03) is taken that both the concepts and advantages of using the most frequently occurring value (a “mode value”) as the common set value are well known and expected in the art. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use a common set value based on a “mode value” of two or more temporary set values determined by the two or more imaging devices for the MOTIVATED REASON of producing a 3D image with a natural shot appearance and consistent exposure in the analogous art of imaging with a plurality of cameras configured to apply a common photographing parameter.
As per CLAIM 8, CHARLES in view of SHIOZAKI in view of MUTO teaches the imaging system according to claim 1, wherein each of the plurality of imaging devices includes the sensor, and the controller groups the plurality of imaging devices into the plurality of groups based on the information regarding the detection result of the sensor (In view of the prior art combination discussed in claim 1 over teachings of CHARLES’ 3D photo booth in Fig. 1-2 taken with SHIOZAKI’s teachings in Fig. 1-3 in view of para [0025, 0040, 0043-45, 0054-56, 0061], which use brightness in the shot image “detection result” to determine a common exposure setting; AND MUTO’s teachings per para [0032 & 0105]: the computer 7 makes the grouping determination (i.e. G1 or G2) based on images and camera information (such as imaging mode and exposure) received from each of the digital cameras 1 to 6, wherein the images and exposure information would at least be from each camera’s sensor, being indicative of environmental conditions for imaging the subject, and are considered to be a “detection result that includes external brightness”, which the computer 7 uses to determine a common set value such as exposure time, shutter speed, and F value to be applied to the plural cameras in one group (G1 or G2) per para [0035, 0104-0105]).
As per CLAIM 9, CHARLES in view of SHIOZAKI in view of MUTO teaches the imaging system according to claim 1, wherein the controller groups the plurality of imaging devices into the plurality of groups based on a position of each of the plurality of imaging devices (This feature is obvious over CHARLES’ 3D photo booth in Fig. 1-2, which discloses a central controller that communicates with respective local controllers/processors to automatically control “positions” of plural cameras 21 positionally grouped (i.e. by master module 10 & slave modules 11) 360 degrees at different angles around the same object based on the size, shape and position of the object (page 13, lines 1-16) such that the total cameras’ angles of view fully cover the vertical and horizontal regions of the object in order to successfully produce a 3D scanned image – see pages 15-19. Furthermore, grouping based on position is obvious over the additional teachings of SHIOZAKI in view of MUTO for the motivated reason of producing a 3D in-focus sharp image with a natural shot appearance. SHIOZAKI (para [0061-63]) teaches using the information of the distances to the in-focus positions in all of the cameras 10-12 to determine each depth of field. Using this, each of the cameras 10-12 can be adjusted (i.e. adjusting diaphragm f value & lens focal length) such that all of the cameras 10-12 come into focus at the in-focus positions to produce a 3D in-focus sharp image with a natural shot appearance; and per MUTO’s teachings in para [0032 & 0104-105], the computer 7 makes the grouping determination based on images and camera information (such as imaging mode and exposure) received from each of the digital cameras 1 to 6 to set a common setting that includes diaphragm f value).
As per CLAIM 10, CHARLES in view of SHIOZAKI in view of MUTO teaches the imaging system according to claim 1, wherein each of the plurality of imaging devices includes: the sensor, wherein the processor is further configured to determine a temporary set value regarding the imaging as the information regarding the detection result of the sensor based on the detection result of the sensor, and the controller groups the plurality of imaging devices into a plurality of groups based on the temporary set value determined by each of the plurality of imaging devices (Given prior art combination discussed in claim 1, this feature is considered obvious over teachings of CHARLES’ 3D photo booth in Fig. 1 taken with SHIOZAKI’s teachings in Fig. 1-3 to use plural cameras to capture a 3D image, each camera having adjustable exposure settings for shutter 144 (shutter speed), diaphragm 211, sensor exposure time per para [0025, 0033, 0061] wherein the camera controller(s) 120/140 may use initial settings “temporary set value” that include exposure condition settings to determine a common set value per para [0043-45, 0054-56, 0061]; AND further view of MUTO’s teachings in Fig. 3 in view of para [0009-0011, 0031-35, 0103-105] that central computer 7 groups the plural digital cameras based on images and camera information such as imaging mode and exposure AND imaging conditions such as exposure time, shutter speed, and F value to be applied to the plural cameras in one group (G1 or G2) per para [0035, 0103-0105]).
As per CLAIM 11, CHARLES in view of SHIOZAKI in view of MUTO teaches the imaging system according to claim 10, wherein the processor is further configured to determine a temporary set value regarding a shutter speed of the imaging as the temporary set value regarding the imaging, and the controller groups the plurality of imaging devices into the plurality of groups based on the temporary set value regarding the shutter speed of the imaging (Given prior art combination discussed in claim 1, this feature is considered obvious over teachings of CHARLES’ 3D photo booth in Fig. 1 taken with SHIOZAKI’s teachings in Fig. 1-3 to use plural cameras to capture a 3D image, each camera having adjustable exposure settings for shutter 144 (shutter speed), diaphragm 211, sensor exposure time per para [0025, 0033, 0061] wherein the camera controller(s) 120/140 may use initial settings “temporary set value” that include exposure condition settings per para [0043-45, 0054-56, 0061]; AND further view of MUTO’s teachings in Fig. 3 in view of para [0009-0011, 0031-35, 0103-105] that central computer 7 groups the plural digital cameras based on camera information such as imaging mode and exposure AND imaging conditions such as exposure time, shutter speed, and F value to be applied to the plural cameras in one group (G1 or G2) per para [0035, 0103-0105]).
As per CLAIM 12, CHARLES in view of SHIOZAKI in view of MUTO teaches the imaging system according to Claim 1, wherein the controller is further configured to perform adjustment of brightness of each of a plurality of illumination devices that illuminate a space to be captured by the plurality of imaging devices based on the information regarding the detection result (Given prior art combination discussed in claim 1, this feature is considered obvious over CHARLES’ teachings of a 3D photo booth and controller for adjusting position/orientation of a plurality of cameras 21 which may also be mounted with a plurality of illumination devices 24/39 – see Fig. 5, 9 & 12 in view of page 14, line 30 – page 15, line 13 and page 21, lines 5-12; taken in combination with SHIOZAKI’s teachings in Fig. 1-3 in view of para [0025, 0040, 0032, 0043-45, 0054-56, 0061]: each camera has a strobe flash 300 with controllable “light emission amount” or “light emission timing”, para [0040], wherein the common parameter includes a flash setting, para [0043, 0061], for the motivated reason of capturing natural shot images with consistent exposure / brightness in the analogous art of 3D imaging with plural cameras).
As per INDEPENDENT CLAIM 13, CHARLES in view of SHIOZAKI in view of MUTO teaches “an imaging system comprising: a plurality of imaging devices that include two or more imaging devices; and a controller that communicates with the plurality of imaging devices, wherein: each of the two or more imaging devices includes:
a housing, a sensor that detects external brightness of the housing, one or more memories; and at least one processor each coupled to at least one of the one or more memories and configured to output information regarding a detection result of the sensor to the controller, the controller is configured to: acquire the information regarding the detection result of the sensor of each of the two or more imaging devices, group the plurality of imaging devices into a plurality of groups; and determine the common set value to be applied to one group including a first number of imaging devices based on the information regarding the detection result of the sensor of each of a second number of imaging devices belonging to the one group, the second number being more than or equal to 2, and is less than or equal to the first number, and each of the first number of imaging devices and each of the second number of imaging devices capture images of a same subject from different angles” (The underlined limitations recited for the “controller” of CLAIM 13 are rejected for the same reasons over the prior art combination (CHARLES, SHIOZAKI & MUTO) discussed for similar limitations recited by the imaging system of CLAIM 1 taken with dependent CLAIM 12).
As per CLAIM 14, CHARLES in view of SHIOZAKI in view of MUTO teaches a 3D model generation system comprising: the imaging system according to claim 1, wherein the 3D model generation system is configured to generate a 3D model of an imaging target of the plurality of imaging devices by using pieces of information on a plurality of images generated by the plurality of imaging devices of the imaging system (In view of the prior art combination discussed in claim 1, one of ordinary skill in the art could have easily derived this feature since CHARLES teaches that the camera groups are utilized in a 3-D photo booth, as shown in Figures 1-2, page 9, and Abstract, in order to obtain a 3D image of a same object, i.e. human or animal. Also, SHIOZAKI’s teachings are applied for image synthesis of the plural images of the same subject 20 from plural cameras 10-12 (Fig. 1) to produce a 3D image, para [0061] in view of para [0005]).
As per INDEPENDENT CLAIM 15, CHARLES in view of SHIOZAKI in view of MUTO teaches: “a controller coupled to at least one of one or more memories and configured to perform operations comprising: acquiring information regarding a detection result of a sensor that detects external brightness, the sensor being included in each of two or more imaging devices among a plurality of imaging devices; determining a common set value regarding imaging based on the information regarding the detection result, the common set value being applied to the plurality of imaging devices; outputting the common set value to the plurality of imaging devices; grouping the plurality of imaging devices into a plurality of groups; and determining the common set value to be applied to one group including a first number of imaging devices based on the information regarding the detection result of the sensor of each of a second number of imaging devices belonging to the one group, the second number being more than or equal to 2, and is less than or equal to the first number, wherein each of the first number of imaging devices and each of the second number of imaging devices capture images of a same subject from different angles” (The underlined limitations recited for the “controller” of CLAIM 15 are rejected for the same reasons over prior art combination (CHARLES, SHIOZAKI & MUTO) discussed for similar limitations recited by imaging system of CLAIM 1).
As per INDEPENDENT CLAIM 16, CHARLES in view of SHIOZAKI in view of MUTO teaches: “a method comprising: acquiring information regarding a detection result of a sensor that detects external brightness, the sensor being included in each of two or more imaging devices among a plurality of imaging devices; determining a common set value regarding imaging based on the information regarding the detection result acquired by the acquiring, the common set value being applied to the plurality of imaging devices; outputting the common set value determined by the determining to the plurality of imaging devices; grouping the plurality of imaging devices into a plurality of groups; and determining the common set value to be applied to one group including a first number of imaging devices based on the information regarding the detection result of the sensor of each of a second number of imaging devices belonging to the one group, the second number being more than or equal to 2, and is less than or equal to the first number, wherein each of the first number of imaging devices and each of the second number of imaging devices capture images of a same subject from different angles” (The underlined limitations recited for the “method” of CLAIM 16 are rejected for the same reasons over prior art combination (CHARLES, SHIOZAKI & MUTO) discussed for similar limitations recited by imaging system of CLAIM 1).
As per CLAIM 17, CHARLES in view of SHIOZAKI in view of MUTO teaches the non-transitory computer readable medium storing a program causing one or more processors (CHARLES’ controller may be programmable processor(s) executing a software method per page 13, lines 1-32 and page 14, lines 21-29) to execute the method according to claim 16 (This claim is rejected for the same reasons over prior art combination (CHARLES, SHIOZAKI & MUTO) discussed for features taught by the imaging system of CLAIMS 1 and 16).
Claims 3 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over CHARLES (GB 2535742) in view of SHIOZAKI (US 20120044373) in view of MUTO (JP 2006217357) in view of KATTA (US 20010019363) -- hereafter, termed as shown “underlined”.
As per CLAIM 3, CHARLES in view of SHIOZAKI in view of MUTO teaches the imaging system according to claim 1 but remains silent to: “wherein the common set value includes a color set value regarding a color of an image generated by the imaging in each of the plurality of imaging devices”.
However, this feature is considered obvious, for example, see related prior art KATTA, para [0280-0311], which teaches an imaging system using a plurality of cameras (Fig. 1-2: image pickup devices 1), wherein a controller 3 determines a common image quality parameter (para [0281, 0304, 0311]) to apply to each camera (Fig. 4) for capturing respective images used to produce a merged image with consistent image quality (Fig. 7-8), wherein the common image quality parameter may be color tint per para [0281, 0304].
Thus, when considering the collective knowledge bestowed by each applied prior art, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to COMBINE the teachings of KATTA into suitable modification with the teachings of CHARLES in view of SHIOZAKI in view of MUTO to produce Applicant’s claimed invention with the structural arrangement / functional configuration stated in said underlined limitation for the MOTIVATED REASON of producing a 3D image with consistent/natural image quality in the analogous art of an imaging system using a plurality of cameras.
As per CLAIM 6, CHARLES in view of SHIOZAKI in view of MUTO teaches the imaging system according to claim 4 but remains silent to: “wherein the controller is further configured to determine, as the common set value, an average value of two or more temporary set values determined by the two or more imaging devices”.
However, this feature is considered obvious, for example, see related prior art KATTA, para [0280-0311], which teaches an imaging system using a plurality of cameras (Fig. 1-2: image pickup devices 1), wherein a controller 3 determines a common image quality parameter (para [0281, 0304, 0311]) to apply to each camera (Fig. 4) for capturing respective images used to produce a merged image with consistent image quality (Fig. 7-8), wherein the control device 3 calculates an average for the image quality parameters set in each of the image pickup devices 1, and controls the image pickup devices 1 for shared use of the average value (or approximate value) as the image quality parameter per para [0288].
Thus, when considering the collective knowledge bestowed by each applied prior art, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to COMBINE the teachings of KATTA into suitable modification with the teachings of CHARLES in view of SHIOZAKI in view of MUTO to produce Applicant’s claimed invention with the structural arrangement / functional configuration stated in said underlined limitation for the MOTIVATED REASON of producing a 3D image with consistent/natural image quality in the analogous art of an imaging system using a plurality of cameras.
Claims 1-4, 8-11 and 14-17 are rejected under 35 U.S.C. 103 as being unpatentable over CHARLES (GB 2535742) in view of NAKANO (US 20140362246) in view of MUTO (JP 2006217357) -- hereafter, termed as shown “underlined”.
As per INDEPENDENT CLAIM 1, CHARLES teaches an imaging system (Fig. 1-2 and Abstract: 3D scanning apparatus/system 1 – see page 9, lines 33-35) comprising:
a plurality of imaging devices that include two or more imaging devices (Fig. 1-2: master imaging module 10 and slave imaging modules 11 are arranged in a circle to capture images of the same object/subject, wherein each module 10/11 includes a vertical arrangement of plural digital cameras 21 – see page 11, lines 22-27; page 14, lines 21-29); and
a controller that communicates with the plurality of imaging devices (Fig. 1-2: control arrangement comprises central controller/computer and local controllers 37/49 which communicate with the plural cameras 21 – see page 12, lines 22-35; page 13, lines 1-31), wherein each of the plurality of imaging devices:
acquire, from the controller, a common set value regarding imaging, and perform a setting regarding the imaging based on the common set value (Central controller/computer and local controllers 37/49 automatically adjust the positions of the plural cameras 21 depending on the size, shape and position of the object to undergo 3D imaging AND issue a “common set value” command to the cameras 21 to capture images simultaneously, which is considered to be a brightness parameter associated with a sensor exposure time as the common set value to each camera – see page 11, lines 25-32; page 12, lines 22-35; page 13, lines 1-16; page 14, lines 21-30), each of the two or more imaging devices further includes:
a housing (Fig. 1-2: each camera 21 has a housing, as shown for the cameras 21 on the master module in Fig. 5-7 and on the slave module in Fig. 12-14); and a sensor (Each digital camera 21 is implied to have a sensor for capturing images – see page 14, lines 21-29 in view of page 13, lines 1-16).
Per the above citations, CHARLES discloses a central controller that communicates with respective local controllers/processors, which respectively interface with the plural cameras 21 positionally grouped (i.e., by master module 10 & slave modules 11) at different angles 360 degrees around the same object, wherein the central controller outputs the common set value (i.e., sensor exposure time) to the totality of plural cameras 21, and the central controller receives captured images of the same object from different angles, which are stored in memory and processed into a 3D image.
However, CHARLES is silent as to each camera’s interior components with regard to a memory, a processor, and a sensor for detecting brightness, and thus does not meet the following limitations:
“wherein each of the plurality of imaging devices includes: one or more memories, and at least one processor each coupled to at least one of the one or more memories; a sensor that detects external brightness of the housing, wherein the processor of each of the two or more imaging devices is further configured to output information regarding a detection result of the sensor to the controller” -AND- “the controller is configured to: acquire the information regarding the detection result of the sensor of each of the two or more imaging devices, determine the common set value to be applied to the plurality of imaging devices based on the information regarding the detection result acquired by the acquisition unit, output the common set value to the plurality of imaging devices”.
Despite CHARLES’ shortcomings, the missing features underlined above were well known in the related field of panorama imaging configured with a plurality of imaging devices for capturing the same subject. For example, see prior art NAKANO’s para [0075], which aims to solve the problem that, when images of the same object are captured using a plurality of image capturing apparatuses, a sense of incompatibility and an unnatural appearance arise in the generated panorama image due to differences in brightness among the captured image frames received from the plural cameras.
NAKANO discloses a panorama imaging system (Fig. 1) configured with a plurality of imaging devices (Fig. 1B: master camera 1B & slave cameras 1A & 1C-1F) for capturing the same subject (Fig. 1: object 30, para [0004, 0030, 0137]), each camera 1A-1F (Fig. 2) having memory (13), processor (3/8), and a sensor for detecting brightness (image sensor 4 detects photographing conditions / state or temporary conditions such as brightness in image, para [0031, 0034-35, 0075, 0085-86, 0132-133]) and outputting results (photographing conditions / state or temporary conditions from each camera such as brightness, shade of color, exposure time, diaphragm, blur, focus, para [0075, 0083, 0085-86]) to a controller 3 such that common parameters can be calculated, output and synchronized for capture amongst the plural cameras 1A-1F per FIG. 3, FIG. 4 (steps SB1-SB3, SB11-SB12) and FIG. 5 (steps SC4, SC7) for the motivated reason discussed in para [0075] stating “once the brightness of the image at the time of photographing is determined in Step SC4, all of the cameras 1 execute the moving image photographing with the same brightness. Therefore, when frame images photographed by the respective cameras 1 at the same time point are combined to create a panorama image, there is no difference in brightness among the frame images and a high-quality natural panorama image without a sense of incompatibility can be obtained”.
Thus, when considering the collective knowledge bestowed by each applied prior art, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to COMBINE the teachings of NAKANO into suitable modification with the teachings of CHARLES to produce Applicant’s claimed invention with the structural arrangement / functional configuration stated in said underlined limitations above for the MOTIVATED REASON of producing a high quality 3D image with a natural appearance in the analogous art of imaging with a plurality of cameras configured to apply a common photographing setting.
CHARLES in view of NAKANO do not teach the concept of determining a common set value to be applied to one group of cameras amongst the total groups of cameras using information regarding the detection result of a camera’s sensor in the one group. Therefore, CHARLES in view of NAKANO do not teach the following limitations (emphasis in bold):
“the controller is configured to: group the plurality of imaging devices into a plurality of groups; and determine the common set value to be applied to one group including a first number of imaging devices based on the information regarding the detection result of the sensor of each of a second number of imaging devices belonging to the one group”.
However, the missing features underlined above were well known in the related field of imaging with a plurality of cameras configured to apply a common set value. For example, see prior art MUTO’s Fig. 3 in view of para [0009-0011, 0031-35, 0103-105], which discloses that a central computer 7 groups the digital cameras 1 to 6 into a plurality of camera groups, such as first and second groups G1 and G2, and simultaneously transmits a control command to the digital cameras 1 to 6 belonging to the same group (G1 or G2), wherein cameras in the same group (G1 or G2) execute a common set value operation when simultaneously photographing an object. Per para [0032 & 0105], the computer 7 makes the grouping determination (i.e., G1 or G2) based on images and camera information (such as imaging mode and exposure) received from each of the digital cameras 1 to 6, wherein the images and exposure information would at least be from each camera’s sensor, being indicative of environmental conditions for imaging the subject, and are considered to be a “detection result that includes external brightness”, which the computer 7 uses to determine a common set value such as exposure time, shutter speed, and F value to be applied to the plural cameras in one group (G1 or G2) per para [0035, 0104-0105].
Thus, when considering the collective knowledge bestowed by each applied prior art, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to COMBINE the teachings of MUTO into suitable modification with the teachings of CHARLES in view of NAKANO to produce Applicant’s claimed invention with the structural arrangement / functional configuration stated in said underlined limitation above for the MOTIVATED REASON of producing a 3D image with a natural shot appearance and consistent exposure in the analogous art of imaging with a plurality of cameras configured to apply a common photographing setting.
Furthermore, given the prior art combination discussed above, the following limitation…
“the second number being more than or equal to 2, and is less than or equal to the first number, each of the first number of imaging devices and each of the second number of imaging devices capture images of a same subject from different angles”,
…would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention because the prior art CHARLES discloses any number of cameras may be used in a 3D scanning photographing system (Abstract) with a plurality of cameras 21 (Fig. 1-2) arranged vertically and positioned 360 degrees in a circle around a same subject (page 10, lines 27-35) for capturing images of the same subject from different angles for the motivated reason of choosing a number of cameras that best suits the size/shape of the object to be 3D scanned (page 11, lines 25-27) in the analogous art of photographing an object with a plurality of digital cameras.
As per CLAIM 2, CHARLES in view of NAKANO in view of MUTO teaches the imaging system according to claim 1, wherein the common set value includes a brightness set value regarding brightness of an image generated by the imaging in each of the plurality of imaging devices (See prior art combination discussed in claim 1 over teachings of CHARLES’ 3D photo booth in Fig. 1-2 taken with NAKANO’s teachings in Fig. 1, Fig. 4-5, Fig. 7, Abstract, in view of para [0030-31, 0034-35, 0075, 0078, 0083, 0085-86, 0100, 0132-133, 0135, 0137] using controller 3 and image sensor 4 to determine photographing conditions / state or temporary conditions from each camera such as brightness, shade of color, exposure time, diaphragm, blur, focus, such that common parameters can be calculated, output and synchronized for capture amongst the plural cameras 1A-1F per FIG. 3, FIG.