Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) was submitted on December 19, 2023. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Specification
The abstract of the disclosure is objected to because of the following informality:
line 5, "solid-state imaging device, acquire external parameters" appears to be missing an "and". Examiner suggests "solid-state imaging device, and acquire external parameters".
A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).
The use of the term "Wi-Fi" (¶0055, line 4), which is a trade name or a mark used in commerce, has been noted in this application. The term should be accompanied by the generic terminology; furthermore, the term should be capitalized wherever it appears or, where appropriate, include a proper symbol indicating use in commerce such as ™, SM, or ® following the term.
Although the use of trade names and marks used in commerce (i.e., trademarks, service marks, certification marks, and collective marks) is permissible in patent applications, the proprietary nature of the marks should be respected and every effort made to prevent their use in any manner which might adversely affect their validity as commercial marks.
The disclosure is objected to because of the following informalities:
¶0006, line 1, "embodiments, n the processing circuit", the "n" appears to be a typo. Examiner suggests "embodiments, in the processing circuit".
¶0032, line 3, "point in the world coordinate system is represented by ," is missing the representation itself. Examiner suggests inserting "[xw, yw, zw]", following the format given for the coordinate system later in the same sentence.
¶0042, lines 3-4, "microsurfaces, for example. (for example, using the SurfelWrap method)" appears to contain a typo. It appears the Surfel example was meant to be included in the preceding sentence.
¶0090, lines 4-7, the phrase "the processing circuit 104 stores parameters related to a trained model that generates a non-rigid model from human movements" appears twice.
Appropriate correction is required.
Claim Objections
Claims 7 and 9 are objected to because of the following informalities:
Claim 7, lines 1, 3, and 5, "a plurality of solid-state imaging device" should use the plural form "devices" (i.e., "a plurality of solid-state imaging devices").
Claim 9, line 1, "readable medium storing program which" is missing an article before "program". Examiner suggests "readable medium storing a program which".
Appropriate correction is required.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
“an imaging unit acquiring first image information and first depth information” in claim 1.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Regarding claim 1, “an imaging unit acquiring first image information and first depth information” will be interpreted under 35 U.S.C. 112(f) as a camera or similar generic imaging device as described in ¶0029, line 1, “The first camera” and in Figure 1, #20A, #20B.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 4 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 4, the claim recites "the ego-solid-state imaging device" in line 4. There is insufficient antecedent basis for this limitation in the claim. For the purpose of examination, the examiner will interpret "the ego-solid-state imaging device" as "the first solid-state imaging device".
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-9 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Independent claims 1, 8, and 9 recite the following limitations, with claim 1 being exemplary:
“(a) generate a first non-rigid model based on the first depth information; (b) fit the first non-rigid model and a second non-rigid model which is based on second depth information acquired by at least one other solid-state imaging device; and (c) acquire external parameters related to the other solid-state imaging device based on a fitting result.”
According to the USPTO guidelines, a claim is directed to non-statutory subject matter if:
STEP 1: the claim does not fall within one of the four statutory categories of invention (process, machine, manufacture or composition of matter), or
STEP 2: the claim recites a judicial exception, e.g., an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception, as determined using the following analysis:
STEP 2A (PRONG 1): Does the claim recite an abstract idea, law of nature, or natural phenomenon?
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application?
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
Using the two-step inquiry, it is clear that independent claim 1 is directed to an abstract idea, as shown below:
STEP 1: Do the claims fall within one of the statutory categories? YES. Independent claims 1, 8, and 9 are directed to a solid-state imaging device, an information processing device, and a non-transitory computer readable medium, respectively.
STEP 2A (PRONG 1): Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea? YES. Independent claims 1, 8, and 9 are directed to a mathematical concept and a mental process (i.e., abstract ideas).
Regarding claims 1, 8, and 9, limitation (b) recites a mental process (see MPEP § 2106.04(a)(2)(III)), as a human mind could fit a first non-rigid model and a second non-rigid model by judging which parts of the first model align with which parts of the second model. Limitation (c) recites a mathematical concept (see MPEP § 2106.04(a)(2)(I)), as the acquired external parameters are the values of a coordinate transformation matrix.
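Examiner notes, for illustration of the mathematical concept identified above: acquiring external parameters from a fitting result amounts to recovering a rotation R and translation t that best align two sets of corresponding 3D points (e.g., joint coordinates). The sketch below uses a least-squares rigid fit (the Kabsch method); the function names and toy data are illustrative assumptions, not the applicant's or Park's implementation.

```python
import numpy as np

def fit_external_parameters(first_model, second_model):
    """Recover the rotation R and translation t that map the second model's
    (N, 3) points onto the first model's points: a least-squares rigid fit
    via SVD (the Kabsch method)."""
    c1 = first_model.mean(axis=0)
    c2 = second_model.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (second_model - c2).T @ (first_model - c1)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c1 - R @ c2
    return R, t

# Toy "joint set" and a rotated/translated copy of it, standing in for the
# same pose seen from two cameras.
rng = np.random.default_rng(0)
joints_a = rng.normal(size=(15, 3))
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
joints_b = (R_true @ joints_a.T).T + t_true

R, t = fit_external_parameters(joints_a, joints_b)
```

With noiseless correspondences the fit recovers the exact transform; with noisy joint estimates it returns the least-squares optimum.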
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application? NO. Independent claims 1, 8, and 9 do not recite additional elements that integrate the judicial exception into a practical application.
Regarding claims 1, 8, and 9, limitation (a) recites “generate a first non-rigid model”. Claim 1 also recites “acquiring first image information and first depth information”. While these limitations are not necessarily abstract ideas, they are insignificant extra-solution activity since they are merely data gathering and output (see MPEP §2106.05(g)). Moreover, the use of images and depth information to acquire data about the structure of an object is well-understood, routine, and conventional activity, as described by Park et al. 2022, Sensors, Page 1, §1. Introduction, lines 1-4, “Recently, RGB-D sensors (cameras) combining RGB and depth sensors have become common and are widely used in various fields. The RGB-D camera helps to accurately and quickly extract the shape of an object and the 3D structure of the surrounding environment.”
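For context on the data-gathering characterization above: extracting 3D structure from a depth image is conventionally a pinhole back-projection. A minimal sketch follows, assuming illustrative intrinsic parameters fx, fy, cx, cy and a toy depth map; none of these values come from Park or the application.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (H x W) into an (N, 3) array of 3D points
    in the camera coordinate system using pinhole intrinsics."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # discard invalid (zero-depth) pixels

# Toy 2x2 depth map; one pixel has no depth reading.
depth = np.array([[1.0, 2.0],
                  [0.0, 4.0]])
points = depth_to_points(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```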
Claims 1 and 8 further recite additional elements “a memory” and “a processor”. Claim 1 further recites the additional element “an imaging unit”. Claim 9 further recites the additional element “non-transitory computer readable medium”. While the above-mentioned limitations from claims 1, 8, and 9 are additional elements, they are not sufficient to recite a practical application of the abstract ideas recited in claims 1, 8, and 9 as they amount to mere generic computer elements and thus amount to no more than a recitation of the words "apply it" (or an equivalent) or are no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP §2106.05(f)).
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? NO. Independent claims 1, 8, and 9 do not recite additional elements that amount to significantly more than the judicial exception.
Regarding claims 1, 8, and 9, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because when considered separately and in combination, the above recited additional elements from claims 1, 8, and 9 do not add significantly more (also known as an “inventive concept”) to the exception. Rather, the additional elements disclosed above perform well-understood, routine, and conventional activity, as described by Park et al. 2022, Sensors, Page 1, §1 Introduction, lines 1-4, “Recently, RGB-D sensors (cameras) combining RGB and depth sensors have become common and are widely used in various fields. The RGB-D camera helps to accurately and quickly extract the shape of an object and the 3D structure of the surrounding environment.”
Therefore, independent claims 1, 8, and 9 are directed towards an abstract idea without a practical application or significantly more.
Regarding claim 2: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitation: wherein the processor acquires the second non-rigid model from the at least one other solid-state imaging device falls under insignificant extra-solution activity of data gathering and output (see MPEP §2106.05(g)).
Regarding claim 3: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitation: wherein the processor acquires the second depth information from the at least one other solid-state imaging device and generates the second non-rigid model falls under insignificant extra-solution activity of data gathering and output (see MPEP §2106.05(g)).
Regarding claim 4: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitation: wherein the processor acquires external parameters for converting a camera coordinate system of the at least one other solid-state imaging device to a camera coordinate system in the ego-solid-state imaging device falls under mathematical concepts (i.e., abstract idea) (see MPEP § 2106.04(a)(2)(I)).
Regarding claim 5: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The claim recites the following limitations: (a) wherein the processor acquires the external parameters by fitting the first non-rigid model and the second non-rigid model, and (b) stores the external parameters in the memory. Limitation (a) falls under mathematical concepts (i.e., abstract idea) (see MPEP § 2106.04(a)(2)(I)), and limitation (b) falls under insignificant extra-solution activity of data gathering and output (see MPEP §2106.05(g)).
Regarding claim 6: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitation: wherein the processor inputs the first depth information into a trained model to generate the first non-rigid model is recited generally and falls under mere instructions to apply an exception (see MPEP §2106.05(f)).
Regarding claim 7: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitation: wherein at least one solid-state imaging device of the plurality of solid-state imaging device acquires external parameters regarding at least one other solid-state imaging device of the plurality of solid-state imaging device falls under mathematical concepts (i.e., abstract idea) (see MPEP § 2106.04(a)(2)(I)).
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 8 and 9 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Park et al. 2022, "3D Static Point Cloud Registration by Estimating Temporal Human Pose at Multiview", Sensors, 22, 1097 (hereafter, "Park").
Regarding claim 8, Park discloses an information processing device comprising: a memory; and a processor (Page 7, §4.1 Environment, lines 4-5, Eight cameras are input to one workstation through an optical cable-type USB 3.0 interface. A person of ordinary skill in the art would understand a workstation to comprise a memory and processor) configured to: fit (Page 5, §3.1 Extrinsic Calibration, ¶1, lines 7-9, In the process of matching the joint set predicted by the two cameras, a coordinate transformation matrix between the two cameras is obtained, and the two cameras can be aligned based on one common world coordinate system. The process of matching the joint sets and aligning the cameras is being considered as “fitting”) a first non-rigid model (Abstract, lines 7-8, uses the joint coordinates of the 3D joint set obtained through pose estimation as feature points; Page 5, §3.1 Extrinsic Calibration, ¶3, line 1, A reference joint set is first selected among the joint sets. The 3D joint set is being considered as a “non-rigid model” and the reference set is considered the “first” model) based on a first depth information acquired by a first imaging device (Page 7-8, §4.2 3D Pose Estimation Result, ¶1, lines 5-6, Both methods can estimate 3D pose using depth in common; Page 7, Figure 8. Figure 8 shows a set of eight RGB images, corresponding depth images, and corresponding models from eight cameras. The reference model or “first” model is one of the eight, and the corresponding depth image and camera is the “first” depth information and imaging device) and a second non-rigid model based on a second depth information acquired by a second imaging device (Page 5, §3.1 Extrinsic Calibration, ¶3, lines 3-4, next, the target joint set is chosen in order that many joints overlap with the joints of the reference joint set. Page 7, Figure 8. The target set is considered the “second” model. The target set is one of the seven image/model sets in Figure 8 that does not correspond to the reference set. The images and camera corresponding to the target set are considered the “second” depth information and imaging device); and acquire external parameters of a second imaging device coordinate system at the second imaging device regarding a first camera coordinate system at the first imaging device based on a fitting result (Page 4, §2.2 Extrinsic Calibration, Eqns. 1-5, ¶2, lines 6-7, the parameters for converting those of other cameras to the reference coordinate system are obtained; Page 5, §3.1 Extrinsic Calibration, ¶1, lines 7-9, In the process of matching the joint set predicted by the two cameras, a coordinate transformation matrix between the two cameras is obtained; Page 5-6, §3.1 Extrinsic Calibration, ¶4, lines 9-11, the calibration for this camera can be temporally continued in the next frames until the extrinsic parameter of the corresponding camera is estimated; Page 5, §3.1 Extrinsic Calibration, ¶3, lines 9-10, the coordinate transformation parameters of the camera are obtained while matching two joint sets).
Regarding claim 9, Park discloses a non-transitory computer readable medium (Page 7, §4.1 Environment, lines 4-5, Eight cameras are input to one workstation through an optical cable-type USB 3.0 interface. A person of ordinary skill in the art would understand a workstation to comprise a non-transitory computer readable medium) storing program which causes a processor to execute: generate a first non-rigid model based on first depth information acquired by a first imaging system (Page 7, §4.2 3D Pose Estimation Result, ¶1, lines 2-3, We estimated human poses for two humans using two different methods (SDK of Azure Kinect, MediaPipe); Page 7-8, §4.2 3D Pose Estimation Result, ¶1, lines 5-6, Both methods can estimate 3D pose using depth in common; Page 7, Figure 8. Figure 8 shows a set of eight RGB images, corresponding depth images, and corresponding models from eight cameras. One set of images, model, and camera is the “first” depth information and imaging system); fit the first non-rigid model (Page 5, §3.1 Extrinsic Calibration, ¶3, line 1, A reference joint set is first selected among the joint sets. The reference set is considered the “first” model and corresponds to the first image set above) and a second non-rigid model based on second depth information acquired by a second imaging system (Page 5, §3.1 Extrinsic Calibration, ¶3, lines 3-4, next, the target joint set is chosen in order that many joints overlap with the joints of the reference joint set. Page 7, Figure 8. The target set is considered the “second” model. The target set is one of the seven image/model sets in Figure 8 that does not correspond to the reference set. The images and camera corresponding to the target set are considered the “second” depth information and imaging system); and obtain external parameters of the first imaging system with respect to the second imaging system based on a fitting result (Page 4, §2.2 Extrinsic Calibration, Eqns. 1-5, ¶2, lines 6-7, the parameters for converting those of other cameras to the reference coordinate system are obtained; Page 5, §3.1 Extrinsic Calibration, ¶1, lines 7-9, In the process of matching the joint set predicted by the two cameras, a coordinate transformation matrix between the two cameras is obtained; Page 5-6, §3.1 Extrinsic Calibration, ¶4, lines 9-11, the calibration for this camera can be temporally continued in the next frames until the extrinsic parameter of the corresponding camera is estimated; Page 5, §3.1 Extrinsic Calibration, ¶3, lines 9-10, the coordinate transformation parameters of the camera are obtained while matching two joint sets).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1 and 3-7 are rejected under 35 U.S.C. 103 as being unpatentable over Park in view of Li et al. (US Patent Application Publication No. 20240163416) (hereafter, "Li").
Regarding claim 1, Park discloses an imaging unit acquiring first image information and first depth information (Page 5, §3.1 Extrinsic Calibration, ¶1, lines 3-4, This paper uses an Azure Kinect as the RGB-D sensor; Page 1, §1 Introduction, ¶1, lines 1-3, The RGB-D camera helps to accurately and quickly extract the shape of an object and the 3D structure of the surrounding environment; Page 5-6, §3.1 Extrinsic Calibration, ¶5, lines 1-2, Considering the case of two cameras, 3D human pose is estimated from two 3D sensors; Page 5, §3.1 Extrinsic Calibration, ¶3, line 1, A reference joint set is first selected among the joint sets. Page 7, Figure 8. The image and depth data corresponding to the reference joint set is being considered as the “first” image and depth information and corresponds to one set of RGB and depth images from Figure 8); a memory; and a processor (Page 7, §4.1 Environment, lines 4-5, Eight cameras are input to one workstation through an optical cable-type USB 3.0 interface. A person of ordinary skill in the art would understand a workstation to comprise a memory and processor) configured to: generate a first non-rigid model based on the first depth information (Page 7, §4.2 3D Pose Estimation Result, ¶1, lines 2-3, We estimated human poses for two humans using two different methods (SDK of Azure Kinect, MediaPipe); Page 7-8, §4.2 3D Pose Estimation Result, ¶1, lines 5-6, Both methods can estimate 3D pose using depth in common; Page 5-6, §3.1 Extrinsic Calibration, ¶5, lines 1-2, Considering the case of two cameras, 3D human pose is estimated from two 3D sensors; Abstract, lines 7-8, uses the joint coordinates of the 3D joint set obtained through pose estimation as feature points. The 3D joint set is being considered as a “non-rigid model”); fit the first non-rigid model (Page 5, §3.1 Extrinsic Calibration, ¶3, line 1, A reference joint set is first selected among the joint sets. The reference set is considered the “first” model) and a second non-rigid model which is based on second depth information acquired by at least one other solid-state imaging device (Page 5, §3.1 Extrinsic Calibration, ¶3, lines 3-4, next, the target joint set is chosen in order that many joints overlap with the joints of the reference joint set. Page 7, Figure 8. The target set is considered the “second” model. The target set is one of the seven image/model sets in Figure 8 that does not correspond to the reference set. The images and camera corresponding to the target set are considered the “second” depth information and imaging device); and acquire external parameters related to the other solid-state imaging device based on a fitting result (Page 4, §2.2 Extrinsic Calibration, Eqns. 1-5, ¶2, lines 6-7, the parameters for converting those of other cameras to the reference coordinate system are obtained; Page 5, §3.1 Extrinsic Calibration, ¶1, lines 7-9, In the process of matching the joint set predicted by the two cameras, a coordinate transformation matrix between the two cameras is obtained; Page 5-6, §3.1 Extrinsic Calibration, ¶4, lines 9-11, the calibration for this camera can be temporally continued in the next frames until the extrinsic parameter of the corresponding camera is estimated; Page 5, §3.1 Extrinsic Calibration, ¶3, lines 9-10, the coordinate transformation parameters of the camera are obtained while matching two joint sets).
Park fails to disclose a solid-state imaging device.
However, Li discloses a solid-state imaging device (Fig 34; ¶304, lines 1-2, FIG. 34 is a block diagram of a device 300 for photographing according to an embodiment; ¶305, lines 1-6, Referring to FIG. 34, the device 300 may include one or more of the following components: a processing component 302, a memory 304, a power component 306, a multimedia component 308, an audio component 310, an input/output (I/O) interface 312, a sensor component 314, and a communication component 316; ¶312, lines 12-15, The sensor component 314 may also include an optical sensor, such as a CMOS or CCD image sensor, for use in imaging applications).
Both Park and Li are analogous to the claimed invention because both are in the field of camera calibration. It would have been obvious to a person of ordinary skill before the effective filing date of the claimed invention to incorporate the solid-state imaging device of Li into the camera calibration system of Park. The suggestion/motivation for doing so would have been that the substitution (see MPEP §2143(I)(B)) of a solid-state imaging device in place of a standalone camera and processing device would have been obvious to a person of ordinary skill in the art (Fig 28; ¶255, lines 1-5, FIG. 28 is a block diagram of a photographing device 100 applied to a first photographing apparatus according to an embodiment. Referring to FIG. 28, the device includes an acquiring unit 101 and a processing unit 102; ¶256, lines 1-2, The acquiring unit 101 is configured to acquire images from one or more second photographing apparatuses. Li provides an alternative embodiment where the camera and the processor are standalone units, indicating they are interchangeable).
This method of improving Park was within the ordinary ability of one of ordinary skill in the art based on the teachings of Li.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Park with the teachings of Li to obtain the invention as specified in claim 1.
Regarding claim 3, Park in view of Li discloses the solid-state imaging device according to claim 1. Park further discloses wherein the processor acquires the second depth information from the at least one other solid-state imaging device (Page 5-6 §3.1 Extrinsic Calibration, ¶5, lines 1-2, Considering the case of two cameras, 3D human pose is estimated from two 3D sensors) and generates the second non-rigid model (Page 5, §3.1 Extrinsic Calibration, ¶3, lines 3-4, next, the target joint set is chosen in order that many joints overlap with the joints of the reference joint set. Page 7 Figure 8. The target set is considered the “second” model and the image and depth information corresponding to the target set is considered the “second” depth information).
Regarding claim 4, Park in view of Li discloses the solid-state imaging device according to claim 1. Park further discloses wherein the processor acquires external parameters for converting a camera coordinate system of the at least one other solid-state imaging device to a camera coordinate system in the ego-solid-state imaging device (Page 5, §3.1 Extrinsic Calibration, ¶1, lines 7-9, In the process of matching the joint set predicted by the two cameras, a coordinate transformation matrix between the two cameras is obtained, and the two cameras can be aligned based on one common world coordinate system; Page 4, §2.2 Extrinsic Calibration, Eqns. 1-5, ¶2, lines 6-7, the parameters for converting those of other cameras to the reference coordinate system are obtained).
Regarding claim 5, Park in view of Li discloses the solid-state imaging device according to claim 4. Park further discloses wherein the processor acquires the external parameters by fitting the first non-rigid model and the second non-rigid model (Page 4, §2.2 Extrinsic Calibration, Eqns. 1-5, ¶2, lines 6-7, the parameters for converting those of other cameras to the reference coordinate system are obtained; Page 5, §3.1 Extrinsic Calibration, ¶1, lines 7-9, In the process of matching the joint set predicted by the two cameras, a coordinate transformation matrix between the two cameras is obtained; Pages 5-6, §3.1 Extrinsic Calibration, ¶4, lines 9-11, the calibration for this camera can be temporally continued in the next frames until the extrinsic parameter of the corresponding camera is estimated; Page 5, §3.1 Extrinsic Calibration, ¶3, lines 9-10, the coordinate transformation parameters of the camera are obtained while matching two joint sets), and stores the external parameters in the memory (Page 10, §4.4 Extrinsic Calibration Result, line 1, This section describes the 3D registration results after multi-view extrinsic calibration; lines 4-6, Figure 13b–d is the 3D registration results using the camera transformation matrix by extrinsic calibration at frames 15, 21, and 30, respectively. As the registration uses multiple different transformation matrices and is performed after calibration is complete, one of ordinary skill in the art would understand that the parameters must be saved in memory at some step).
Regarding claim 6, Park in view of Li discloses the solid-state imaging device according to claim 1. Park further discloses wherein the processor inputs the first depth information into a trained model to generate the first non-rigid model (Page 7, §4.2, titled 3D Pose Estimation Result, lines 4-6, Figure 9 is the estimation result using the deep learning solution provided by MediaPipe. Both methods can estimate 3D pose using depth in common. The MediaPipe deep learning solution is considered a “trained model”).
Regarding claim 7, Park in view of Li discloses the solid-state imaging device according to claim 1.
Park fails to disclose an information processing system comprising a plurality of solid-state imaging devices and wherein at least one solid-state imaging device of the plurality of solid-state imaging devices acquires external parameters regarding at least one other solid-state imaging device of the plurality of solid-state imaging devices.
However, Li discloses an information processing system (¶130, lines 1-11, the first photographing apparatus receives the intrinsic parameters of the one or more second photographing apparatuses respectively transmitted by the one or more second photographing apparatuses based on the depth information detection instruction, and the intrinsic parameters include the focal length, image center, distortion parameters and the like of the camera; based on the intrinsic parameters, the extrinsic parameters between the first photographing apparatus and the one or more second photographing apparatuses are determined. Determining the extrinsic parameters from intrinsic parameters is considered information processing) comprising a plurality of solid-state imaging devices (¶131, lines 1-2, the plurality of photographing apparatuses) and wherein at least one solid-state imaging device of the plurality of solid-state imaging devices acquires external parameters regarding at least one other solid-state imaging device of the plurality of solid-state imaging devices (¶130, lines 14-18, The first photographing apparatus transmits the extrinsic parameters between the first photographing apparatus and the one or more second photographing apparatuses to the one or more second photographing apparatuses, respectively).
Both Park and Li are analogous to the claimed invention because they are both in the field of camera calibration. It would have been obvious to a person of ordinary skill before the effective filing date of the claimed invention to incorporate the transfer of external parameters between imaging devices of Li into the camera calibration system of Park. The suggestion/motivation for doing so would have been to improve the image quality by taking synchronous images (¶205, lines 7-8, the images synchronously photographed by the terminals are integrated to improve the image quality; ¶130, lines 18-27, the first photographing apparatus receives the synchronization information fed back by the one or more second photographing apparatuses after the one or more second photographing apparatuses are calibrated based on the extrinsic parameters between the first photographing apparatus and the one or more second photographing apparatuses. After the synchronization information is acquired, the first photographing apparatus controls two or more second photographing apparatuses to photograph the scene at the same time).
This method of improving Park was within the ordinary ability of one of ordinary skill in the art based on the teachings of Li.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Park with the teachings of Li to obtain the invention as specified in claim 7.
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Park in view of Li as applied to claim 1 above, and further in view of Kopeinigg et al. (US Patent Application Publication No. 20200312011) (hereafter, "Kopeinigg").
Regarding claim 2, Park in view of Li discloses the solid-state imaging device according to claim 1.
Park in view of Li does not disclose wherein the processor acquires the second non-rigid model from the at least one other solid-state imaging device.
Kopeinigg discloses wherein the processor acquires the second non-rigid model from the at least one other solid-state imaging device (¶0031, lines 4-7, For example, as shown in FIG. 2, a 3D reference model 210 generated (e.g., and continuously updated) by system 100 may be transmitted by way of a network 212 to a media player device 214; ¶0032, lines 9-11, In the same or other examples, media player device 214 may be implemented as a general-purpose computing device).
Park, Li, and Kopeinigg are analogous to the claimed invention because they are all in the field of camera calibration. It would have been obvious to a person of ordinary skill before the effective filing date of the claimed invention to incorporate the transfer of 3D models between computing devices of Kopeinigg into the solid-state imaging device of Li and the camera calibration system of Park. The suggestion/motivation for doing so would have been the use of a known technique to improve similar devices in the same way (see MPEP §2143(I)(C)). The improvement of the “base” device (the solid-state imaging device disclosed by Park in view of Li) with the method of transferring model data between two computing devices from Kopeinigg would have been within the ability of one of ordinary skill in the art and would have yielded predictable results.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Park in view of Li with the teachings of Kopeinigg to obtain the invention as specified in claim 2.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Luo et al. (US Patent No. 11403781B2) discloses a method of acquiring external parameters of multiple cameras (Col 6, lines 46-51, As another example, extrinsic parameters defining the scene alignment for a set of cameras capturing the same scene from different vantage points may be initialized or corrected based on how well the locations of calibration points align when located based on images captured by different cameras) and the use of machine learning for camera calibration (Figure 6 #610; Col 11, lines 15-18, Machine learning model 610 may serve as an input to an operation 612 for calibration preparation processing, which may be included within intra-capture calibration system 300 of system 100).
Zhang et al. (US Patent Application Publication No. 20230103385) discloses a method of calibrating a plurality of cameras (¶0047, lines 1-4, computing an external parameter matrix of the camera according to marker coordinates of a camera coordinate system and marker coordinates of a world coordinate system) and reconstructing a 3D model using data from multiple viewpoints (Claim 1, lines 14-18, unifying point clouds under the world coordinate system to obtain a plurality of point clouds under different viewing angles; and stitching the plurality of point clouds together to obtain a 3D reconstructed image).
Wen et al. (PCT International Publication No. WO 2018129104 A1) discloses extrinsic camera calibration using SURF feature points (Figure 16; ¶0080, lines 5-6, after the camera is chosen, it will extract surf feature points and do matching between cameras).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIAOMAO DING whose telephone number is (571)272-7237. The examiner can normally be reached Mon-Fri 8:00-4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Henok Shiferaw can be reached at (571) 272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/X.D./Examiner, Art Unit 2676
/Henok Shiferaw/Supervisory Patent Examiner, Art Unit 2676