DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “polarization optical imaging module” in claims 1 and 4, “polarization image processing module” in claims 1 and 2, “target scene image enhancement module” in claims 1 and 3, “target scene interpretation module” in claim 1, “pre-processing sub-module” in claim 1, “transformer sub-module” in claims 1 and 5, “prediction output sub-module” in claims 1 and 5, “noise reduction module” in claims 1 and 5.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-8 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Interpretation of claim 1:
Step 1:
Claim 1 is directed to a system configured to perform polarization imaging on a target scene to obtain a polarization image (machine).
Step 2A Prong One:
Regarding claim 1, the following limitations are abstract ideas that fall in the mathematical concept grouping of abstract ideas:
“a polarization image processing module, configured to perform calculation on the polarization image to obtain polarization information of the target scene”
Step 2A Prong Two:
Regarding claim 1, there are no additional elements recited in the claim that integrate the abstract ideas into a practical application. Specifically, the examiner finds that the following additional elements merely add insignificant extra-solution activity to the abstract idea:
“a target scene image enhancement module, configured to generate image information to be restored of the target scene according to the polarization information of the target scene”
“and a target scene interpretation module, configured to obtain interpretation information of the target scene based on the image information to be restored of the target scene, spectral information, or intensity information through neural networks”
“the pre-processing sub-module is configured to take a combination of an intensity image, a polarization degree, a polarization phase angle, a restored target scene image, and the spectral information as an input or a partial combination of the intensity image, the polarization degree, the polarization phase angle, the restored target scene image, the spectral information as the input and perform noise reduction and feature fusion on the polarization image in combination with a noise reduction module and a convolutional neural network (CNN)”
“the transformer sub-module is configured to extract related data of a target object through a Transformer model”
“and the prediction output sub-module is configured to process data output by the transformer sub-module through a feedforward neural network (FNN) to obtain a multi-target detection result”
Further, the examiner finds that each of the following additional elements do no more than generally link the use of the abstract idea to a particular technological environment or field of use because they are merely an incidental or token addition to the claim that does not alter or affect how the process step of imaging to obtain polarization information of the target scene is performed:
“a polarization optical imaging module, configured to perform polarization imaging on a target scene to obtain a polarization image”
“wherein the target scene image enhancement module adopts a multi-dimensional target detection neural network based on Detection Transformer (DT), the multi-dimensional target detection neural network comprises a pre-processing sub-module, a transformer sub-module, and a prediction output sub-module”
Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application. Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. For example, there is no indication that the combination of elements improves the functioning of a computer or improves any other technology.
Step 2B:
The examiner finds that the following additional elements are Well-Understood, Routine, Conventional Activity because, as explained in MPEP 2106.05(d), they amount to receiving data, which the courts have found to be Well-Understood, Routine, Conventional Activity:
“a target scene image enhancement module, configured to generate image information to be restored of the target scene according to the polarization information of the target scene”
“and a target scene interpretation module, configured to obtain interpretation information of the target scene based on the image information to be restored of the target scene, spectral information, or intensity information through neural networks”
“the pre-processing sub-module is configured to take a combination of an intensity image, a polarization degree, a polarization phase angle, a restored target scene image, and the spectral information as an input or a partial combination of the intensity image, the polarization degree, the polarization phase angle, the restored target scene image, the spectral information as the input and perform noise reduction and feature fusion on the polarization image in combination with a noise reduction module and a convolutional neural network (CNN)”
“the transformer sub-module is configured to extract related data of a target object through a Transformer model”
“and the prediction output sub-module is configured to process data output by the transformer sub-module through a feedforward neural network (FNN) to obtain a multi-target detection result”
Discussion of Dependent Claims:
Claims 2-8 do not recite any additional elements that integrate the abstract ideas into a practical application or amount to significantly more than the abstract ideas.
Therefore, for the reasons outlined above, claims 1-8 are not patent eligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-2 and 5 is/are rejected under 35 U.S.C. 103 as being unpatentable over Guo et al. (CN 115147506 A) in view of Qi et al. (CN 114693577 A).
Regarding claim 1, Guo discloses a polarization intelligent sensing system (Fig 1), comprising: a polarization optical imaging module, configured to perform polarization imaging on a target scene to obtain a polarization image (Abstract); a polarization image processing module, configured to perform calculation on the polarization image to obtain polarization information of the target scene (Abstract); a target scene image enhancement module, configured to generate image information to be restored of the target scene according to the polarization information of the target scene (Abstract); and a target scene interpretation module, configured to obtain interpretation information of the target scene based on the image information to be restored of the target scene, spectral information, or intensity information through neural networks (Abstract).
Guo discloses a polarization image reconstruction method based on neural network models but does not explicitly disclose wherein the target scene image enhancement module adopts a multi-dimensional target detection neural network based on Detection Transformer (DT), the multi-dimensional target detection neural network comprises a pre-processing sub-module, a transformer sub-module, and a prediction output sub-module; the pre-processing sub-module is configured to take a combination of an intensity image, a polarization degree, a polarization phase angle, a restored target scene image, and the spectral information as an input or a partial combination of the intensity image, the polarization degree, the polarization phase angle, the restored target scene image, the spectral information as the input and perform noise reduction and feature fusion on the polarization image in combination with a noise reduction module and a convolutional neural network (CNN); the transformer sub-module is configured to extract related data of a target object through a Transformer model; and the prediction output sub-module is configured to process data output by the transformer sub-module through a feedforward neural network (FNN) to obtain a multi-target detection result.
However, Qi, in the same field of endeavor of polarization imaging systems and methods, discloses a polarization image fusion method wherein a target scene image enhancement module adopts a multi-dimensional target detection neural network based on Detection Transformer (DT) (See Fig. 1 and 3), the multi-dimensional target detection neural network comprises a pre-processing sub-module, a transformer sub-module, and a prediction output sub-module; the pre-processing sub-module is configured to take a combination of an intensity image, a polarization degree, a polarization phase angle, a restored target scene image, and the spectral information as an input or a partial combination of the intensity image, the polarization degree, the polarization phase angle, the restored target scene image, the spectral information as the input and perform noise reduction and feature fusion on the polarization image in combination with a noise reduction module and a convolutional neural network (CNN) (Abstract – step 1; Pg. 3, line 29 - 19); the transformer sub-module is configured to extract related data of a target object through a Transformer model (Fig. 1 and 3; Abstract – steps 2 and 3; Pg. 4, line 19 – Pg. 8, line 17); and the prediction output sub-module is configured to process data output by the transformer sub-module through a feedforward neural network (FNN) to obtain a multi-target detection result (Abstract – step 4; Pg. 8, lines 18-29).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Guo with the methods of Qi providing a machine learning method which improves training performance and precision in detecting polarization information in a polarization image (Qi: Pg. 8, line 24 – Pg. 9, line 10).
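For illustration only, the DETR-style pipeline recited above (CNN/pre-processing feature fusion, a transformer feature-extraction stage, and an FNN prediction head) can be sketched in miniature. The dimensions, random weights, and single-head attention below are assumptions of this sketch, not details taken from Guo or Qi:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    # Single-head scaled dot-product attention over a token sequence x of
    # shape (n_tokens, d) -- the transformer sub-module's core operation.
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def ffn(x, w1, b1, w2, b2):
    # Feedforward (ReLU MLP) prediction head producing per-query class logits,
    # standing in for the prediction output sub-module.
    return np.maximum(0.0, x @ w1 + b1) @ w2 + b2

n_tokens, d, n_classes = 8, 16, 4
x = rng.normal(size=(n_tokens, d))      # stand-in for fused polarization features
wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))
h = self_attention(x, wq, wk, wv)       # transformer-style feature extraction
logits = ffn(h, rng.normal(size=(d, 32)), np.zeros(32),
             rng.normal(size=(32, n_classes)), np.zeros(n_classes))
print(logits.shape)                     # one logit vector per query token: (8, 4)
```

A production DETR additionally uses learned object queries, multi-head cross-attention, and a Hungarian-matching loss; those components are omitted here for brevity.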
Regarding claim 2, Guo in view of Qi discloses the polarization intelligent sensing system according to claim 1, as outlined above, and further discloses wherein the polarization image processing module comprises a polarization image preprocessing unit and a polarization information calculation unit (Guo: Pg. 4, lines 10-18); the polarization image preprocessing unit is configured to preprocess the polarization image to obtain a preprocessed polarization image, and the polarization information calculation unit is configured to perform calculation on the preprocessed polarization image to obtain polarization degree information and polarization phase angle information of pixels (Guo: Abstract; Pg. 3, line 24 – Page 4, line 32; Pg. 6, lines 6-8).
Regarding claim 5, Guo in view of Qi discloses a polarization intelligent sensing method, applied to the polarization intelligent sensing system according to claim 1, as outlined above, and further discloses a step S1: performing the polarization imaging on the target scene to obtain the polarization image (Guo: abstract); a step S2: performing the calculation on the polarization image to obtain the polarization information of the target scene (Guo: abstract); a step S3: generating the image information to be restored of the target scene according to the polarization information of the target scene (Guo: abstract), and constructing the multi-dimensional target detection neural network based on the DETR (Qi: See Fig. 1 and 3); and a step S4: obtaining the interpretation information of the target scene based on the image information to be restored of the target scene, the spectral information, or the intensity information through the neural networks; wherein the multi-dimensional target detection neural network comprises the preprocessing sub-module, the transformer sub-module, and the prediction output sub-module; the preprocessing sub-module is configured to take the combination of the intensity image, the polarization degree, the polarization phase angle, the restored target scene image, and the spectral information as the input or the partial combination of the intensity image, the polarization degree, the polarization phase angle, the restored target scene image, the spectral information as the input and perform the noise reduction and the feature fusion on the polarization image in combination with the noise reduction module and the CNN (Qi: Abstract – step 1; Pg. 3, line 29 - 19); the transformer sub-module is configured to extract related data of the target object through the Transformer model (Qi: Fig. 1 and 3; Abstract – steps 2 and 3; Pg. 4, line 19 – Pg. 8, line 17); and the prediction output sub-module is configured to process the data output by the transformer sub-module through the FNN to obtain the multi-target detection result (Qi: Abstract – step 4; Pg. 8, lines 18-29).
Claim(s) 4 is/are rejected under 35 U.S.C. 103 as being unpatentable over Guo et al. (CN 115147506 A) in view of Qi et al. (CN 114693577 A) further in view of Wang et al. (CN 115561916 A).
Regarding claim 4, Guo in view of Qi discloses the polarization intelligent sensing system according to claim 1, as outlined above, but does not explicitly disclose wherein the polarization optical imaging module comprises a polarization optical lens and a polarization detector, the polarization optical lens and the polarization detector are matched with each other, and the polarization optical lens has a weak polarization modulation characteristic.
However, Wang, in the same field of endeavor of polarization imaging devices and methods, discloses wherein a polarization optical imaging module comprises a polarization optical lens (telescope objective lens, LD, LR, relay projection objective lens) and a polarization detector (DMD), the polarization optical lens and the polarization detector are matched with each other, and the polarization optical lens has a weak polarization modulation characteristic (Fig. 1 and 3; Abstract; Pg. 3, lines 8-13).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Guo in view of Qi with the polarization imaging system of Wang, which is able to compensate for large polarization aberration effects, improving the quality and signal-to-noise ratio of the polarization imaging device.
Claim(s) 6 and 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Guo et al. (CN 115147506 A) in view of Qi et al. (CN 114693577 A) further in view of Liu et al. (CN 110570364 A).
Regarding claim 6, Guo in view of Qi discloses the polarization intelligent sensing method according to claim 5, as outlined above, and further discloses wherein the step S2 comprises: a step S21: preprocessing the polarization image to obtain a preprocessed polarization image (Guo: Pg. 4, lines 10-18); a step S22: obtaining Stokes vectors of the target scene from the preprocessed polarization image (Guo: Abstract; Pg. 3, line 24 – Page 4, line 32; Pg. 6, lines 6-8); and a step S23: calculating to obtain the polarization information of the target scene based on the Stokes vectors; wherein the polarization information comprises polarization degree information and polarization phase angle information; a calculation formula of the polarization degree information DoP is as follows: DoP = √(Q² + U² + V²)/I; or a calculation formula of linear polarization degree information DoLP is as follows: DoLP = √(Q² + U²)/I; wherein the I, the Q, the U, the V respectively denote the Stokes vectors of the target scene (Guo: Abstract; Pg. 3, line 24 – Page 4, line 32; Pg. 6, lines 6-8).
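For illustration, the degree-of-polarization formulas above follow directly from the Stokes parameters; the minimal sketch below uses the standard four-intensity Stokes estimation from a textbook construction, not a step taken from Guo:

```python
import numpy as np

def stokes_from_intensities(i0, i45, i90, i135):
    # Standard linear Stokes parameters from four polarizer-angle intensities.
    I = 0.5 * (i0 + i45 + i90 + i135)
    Q = i0 - i90
    U = i45 - i135
    return I, Q, U

def dop(I, Q, U, V=0.0):
    # Degree of polarization: sqrt(Q^2 + U^2 + V^2) / I
    return np.sqrt(Q**2 + U**2 + V**2) / I

def dolp(I, Q, U):
    # Degree of linear polarization: sqrt(Q^2 + U^2) / I
    return np.sqrt(Q**2 + U**2) / I

# Fully horizontally polarized light: i0 = 1, i90 = 0, i45 = i135 = 0.5
I, Q, U = stokes_from_intensities(1.0, 0.5, 0.0, 0.5)
print(dolp(I, Q, U))   # 1.0
```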
Guo in view of Qi does not explicitly disclose that a calculation formula of the polarization phase angle information AoLP is as follows: AoLP = (180/π)·arctan(U/Q).
However, Liu, in the same field of endeavor of polarization imaging systems and methods, discloses a calculation formula of the polarization phase angle information AoLP is as follows: AoLP = (180/π)·arctan(U/Q) (See paragraphs [0019]-[0020] of the original document).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Guo in view of Qi with additional polarization information, including polarization phase angle information, increasing the functionality of the imaging system.
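The claimed phase-angle formula can be sketched as follows; note this implements the literal claimed expression arctan(U/Q), whereas textbook AoLP definitions often include a factor of 1/2 and use a two-argument arctangent for quadrant handling:

```python
import numpy as np

def aolp_deg(Q, U):
    # Claimed formula: AoLP = (180/pi) * arctan(U/Q).
    # (Textbook AoLP is usually 0.5*arctan2(U, Q); the literal claimed
    #  expression is used here for illustration.)
    return np.degrees(np.arctan(U / Q))

print(aolp_deg(1.0, 1.0))   # 45.0
```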
Regarding claim 7, Guo in view of Qi and Liu discloses the polarization intelligent sensing method according to claim 6, as outlined above, and further discloses a step of defining adjacent four pixels as a super pixel I = [I0, I45, I90, I135], denoting detection intensity as Id, denoting a real target intensity as It, calibrating the detection intensity through a polynomial fitting method to perform image denoising on the polarization image; wherein the polynomial fitting method satisfies the following formula: It = a0 + a1Id + a2Id² + … + anIdⁿ; wherein a0 … an are coefficients of a polynomial (Guo: Abstract; Pg. 3, line 24 – Page 4, line 32; Pg. 6, lines 6-8).
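The polynomial calibration recited above can be illustrated with a least-squares fit; the synthetic data and polynomial degree below are assumptions of this sketch, not values from the references:

```python
import numpy as np

# Recover coefficients a0..an mapping detected intensity Id to "true"
# intensity It, per It = a0 + a1*Id + a2*Id^2 + ... + an*Id^n.
true_coeffs = [0.05, 0.9, 0.1]                  # illustrative a0, a1, a2
Id = np.linspace(0.0, 1.0, 50)
It = sum(a * Id**k for k, a in enumerate(true_coeffs))

fit = np.polyfit(Id, It, deg=2)                 # highest-degree term first
print(np.round(fit[::-1], 3))                   # recovers [0.05 0.9 0.1]
```

In practice the fit would be computed against a calibration target of known intensity rather than synthetic data, and the degree n chosen to balance fit quality against overfitting sensor noise.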
Allowable Subject Matter
Claims 3 and 8 are objected to as being dependent upon a rejected base claim and are further rejected under 35 U.S.C. 101, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, and if rewritten or amended to overcome the rejection under 35 U.S.C. 101.
Regarding claim 3, the prior art, alone or in combination, fails to disclose or render obvious the polarization intelligent sensing system according to claim 1, wherein the target scene image enhancement module adopts a polarization image enhancement model IMG(O) = k1*C1{I0, I45, I90, I135} + k2*C2{IMG(D)} + k3*C3{IMG(A)}; wherein the IMG(O) denotes the image information to be restored of the target scene, the C1{I0, I45, I90, I135} denotes a preferred image calculated according to a first polarization image, a second polarization image, a third polarization image, and a fourth polarization image respectively having a linear polarization angle of 0°, 45°, 90°, 135°; the preferred image is a single polarization angle image or a multi-angle calculated image; the IMG(D) denotes a polarization degree image, and the C2{IMG(D)} denotes a first calculation image containing edge highlight information of a target space obtained through the polarization degree image; the IMG(A) denotes a polarization phase angle image, and the C3{IMG(A)} denotes a second calculation image containing surface information of a target object obtained through the polarization phase angle image; the k1, the k2, and the k3 respectively denote an intensity coefficient of a calculation preferred image of the polarization image, an intensity coefficient of a polarization degree calculation image of the polarization image, and an intensity coefficient of a polarization phase angle calculation image of the polarization image.
With regard to the above claim, Guo et al. (CN 115147506 A), Qi et al. (CN 114693577 A), Liu et al. (CN 110570364 A) and Wang et al. (CN 115561916 A) all relate to polarization imaging systems and methods utilizing neural network architectures for extracting polarization information from polarization images. Guo describes a general method which captures multi-angled polarization images, image denoising and fitting using a deep neural network model to reconstruct blurred images and extract target structure (abstract). Similarly, Qi discloses an infrared polarization image fusion method involving pre-processing, noise reduction steps and a transformer neural network model designed to extract polarization characteristics of the target and improve network performance over conventional neural networks (abstract). Liu focuses on a denoising method utilizing deep neural networks which allows for the fitting or optimization of a loss function in order to produce a de-noised image, improving polarization image quality (abstract). Lastly, Wang discloses a polarization imaging system including polarization lenses and a polarization detector configured to compensate for polarization aberration, allowing for improved image quality while utilizing a low-cost, motionless polarization imaging device (abstract). All references fail to explicitly disclose the polarization image enhancement model recited in claim 3 above.
Regarding claim 8, the prior art, alone or in combination, fails to disclose or render obvious the polarization intelligent sensing method according to claim 5, wherein in the step S3, constructing a polarization image enhancement model IMG(O) = k1*C1{I0, I45, I90, I135} + k2*C2{IMG(D)} + k3*C3{IMG(A)}; wherein the IMG(O) denotes the image information to be restored of the target scene, the C1{I0, I45, I90, I135} denotes a preferred image calculated according to a first polarization image, a second polarization image, a third polarization image, and a fourth polarization image respectively having a linear polarization angle of 0°, 45°, 90°, 135°, the preferred image is a single polarization angle image or a multi-angle calculated image; the IMG(D) denotes a polarization degree image, and the C2{IMG(D)} denotes a first calculation image containing edge highlight information of a target space obtained through the polarization degree image; the IMG(A) denotes a polarization phase angle image, and the C3{IMG(A)} denotes a second calculation image containing surface information of a target object obtained through the polarization phase angle image; the k1, the k2, and the k3 respectively denote an intensity coefficient of a calculation preferred image of the polarization image, an intensity coefficient of a polarization degree calculation image of the polarization image, and an intensity coefficient of a polarization phase angle calculation image of the polarization image.
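The enhancement model above is a weighted combination of three derived images; a minimal sketch, where the three input images stand in for the C1, C2, and C3 calculation images and the weights k1–k3 are illustrative values (not taken from the application):

```python
import numpy as np

def enhance(pref_img, dop_img, aolp_img, k1=0.6, k2=0.3, k3=0.1):
    # IMG(O) = k1*C1{I0, I45, I90, I135} + k2*C2{IMG(D)} + k3*C3{IMG(A)}
    # pref_img stands in for the preferred image C1{...}, dop_img for the
    # polarization-degree calculation image C2{IMG(D)}, and aolp_img for
    # the phase-angle calculation image C3{IMG(A)}.
    return k1 * pref_img + k2 * dop_img + k3 * aolp_img

out = enhance(np.ones((2, 2)), np.full((2, 2), 2.0), np.zeros((2, 2)))
print(out)   # 0.6*1 + 0.3*2 + 0.1*0 = 1.2 everywhere
```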
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MAHER YAZBACK whose telephone number is (703)756-1456. The examiner can normally be reached Monday - Friday 8:30 am - 5:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle Iacoletti can be reached at (571)270-5789. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MAHER YAZBACK/
Examiner, Art Unit 2877 /MICHELLE M IACOLETTI/Supervisory Patent Examiner, Art Unit 2877