DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA .
This office action is in response to the amendment/Applicant’s response filed on 29 January 2026.
This office action is made Final.
Claims 1, 3, 4, and 12-13 have been amended.
Claim 11 has been cancelled.
Claim 14 has been added.
The objections to the specification/abstract and the claims, the 112 rejection of claims 3-4, and the 103 rejection of claim 4, as presented in the previous office action, have been withdrawn as necessitated by Applicant's amendment.
Claims 1-4, 7-9, 12-14 are pending. Claims 1, 12, and 14 are independent claims.
Specification
The amendment to the abstract filed on 1/29/26 has been accepted and entered.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
“control unit … configured to transform” in claim 14;
“control unit to provide” in claim 14;
“control unit (to) compare/comparing” in claim 14; and
“control unit (to) calculate” in claim 14.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 14 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
As to independent claim 14, all of the claim limitations containing “a control unit configured to” or “control unit to/(to)” invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for the claimed function of each “...unit configured to” limitation. Also, no clear algorithm is shown in the specification corresponding to each of the claimed unit(s)/means, as required by MPEP § 2181, subsection II.B.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph; or
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the claimed function, without introducing any new matter (35 U.S.C. 132(a)). If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
Any claim not specifically addressed above is rejected based on its failure to overcome the incorporated deficiencies of the claim upon which it depends.
Claim 14 recites the limitation/element “a luminance map” in lines 16-17. However, claim 14 already introduced the element/term “luminance map” in line 4. Therefore, it is unclear to the Examiner whether the “luminance map” of lines 16-17 refers back to the “luminance map” of line 4 or should be viewed as a separate element. Therefore, the claim is vague and indefinite. For examining purposes, the Examiner will view this portion of the limitation as “…previously configured to transform the image data into the luminance map in a first training process of the control unit by a means of …”
Claim 14 recites the limitation/element “a luminance map with a training dataset of image data” in line 18. However, claim 14 already introduced the element/term “luminance map with a training dataset of image data” in line 7. Therefore, it is unclear to the Examiner whether the element/term of line 18 refers back to that of line 7 or should be viewed as a separate element. Therefore, the claim is vague and indefinite. For examining purposes, the Examiner will view this portion of the limitation as “…implemented by a training operation that provides the luminance map with the training dataset of the image data …”
Claim 14 recites the limitation/element “a measured luminance map” in line 20. However, claim 14 already introduced the element/term “a measured luminance map” in line 8. Therefore, it is unclear to the Examiner whether the “measured luminance map” of line 20 refers back to the “measured luminance map” of line 8 or should be viewed as a separate element. Therefore, the claim is vague and indefinite. For examining purposes, the Examiner will view this portion of the limitation as “…comparing the luminance map carried out by the control unit with the measured luminance map …”
Claim 14 recites the limitation/element “a training dataset of luminance map” in lines 24-25. However, claim 14 already introduced the element/term “training dataset of luminance map” in lines 12-13. Therefore, it is unclear to the Examiner whether the element/term of lines 24-25 refers back to that of lines 12-13 or should be viewed as a separate element. Therefore, the claim is vague and indefinite. For examining purposes, the Examiner will view this portion of the limitation as “…training the control unit to calculate the adapted light pattern in response to the training dataset of the luminance map …”
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3 and 9 remain rejected, and claim 14 is rejected, under 35 U.S.C. 103 as being unpatentable over Shibata et al (US20200139879, pub. 5/7/2020) in view of Sunkavalli et al (US20190164261, 2019) in further view of Fdhal et al (US20130113383, 2013) and in further view of Takai et al (US20210162927).
As per independent claim 1, Shibata et al discloses a method for operating an automotive lighting device with a matrix arrangement of light pixels, (FIG 1, 2A; 0044: The optical deflection device 26 includes a micro mirror array 32 in which multiple micro mirror elements 30 are arranged in a matrix) the method comprising the steps of:
acquiring image data of a working zone in front of the automotive lighting device; (0035, 0050: imager of the vehicular lamp of the vehicle captures images of an area in front of the vehicle)
transforming the image data into a luminance map; (0051-0052: image data is transmitted to luminance analyzer to create a spatial distribution of light data. Luminance of each region is identified where the collection of region is a form of a map)
wherein transforming the image data into a luminance map is implemented by a training operation that provides a luminance map with a training dataset of image data; (0046, 0056, 0070: the luminance of each individual region R is detected every 0.1 to 5 ms, and the luminance of the individual region R is associated with pedestrians, etc., along with defining luminance ranges L1-L3. Furthermore, Shibata discloses providing a luminance map with a training dataset of image data through the detection of the luminance value of each individual region R; see, e.g., 0052-0054)
providing a desired light pattern (0053: The lamp controller 18 sets/provides an illuminance value of light emitted to each individual region R)
calculating an adapted light pattern which provides the desired light pattern when projected over the luminance map; (FIG 5A; 0053-0057, 0082, 0074, 0218; Claim 1: based on the detection result from the luminance analyzer 14, the illuminance setting unit 42 determines the illuminance value of light emitted to each individual region R; the illuminance setting unit 42 produces a predetermined light distribution pattern based on the relationships between the detected luminance value and the set illuminance value shown)
wherein the calculating the adapted light pattern is carried out by (0053: illuminance value provided/carried/calculated by the illuminance setting unit 42 of control device 50 (FIG 1)): a training operation to compute an adapted light pattern in response to a training dataset of luminance map and desired light pattern (FIG 5A; 0053-0057, 0082, 0074, 0218; Claim 1: set illuminance value for each luminance range L1-L3; 0027: the light source unit 10 changes the light distribution pattern every 0.1 to 5 ms, so the detected luminance value of each individual region R measured after the change can be said to be a light pattern).
projecting the adapted light pattern (0047-0049, 0065 0081: the adapted lighted pattern produced is projected)
While Shibata discloses the use of machine learning algorithms (0068, 0153), Shibata does not disclose the luminance analyzer using a machine learning algorithm to transform the captured image into a spatial distribution of light data. In other words, Shibata does not disclose wherein an operation of transforming the image data includes a use of a machine learning algorithm. However, Sunkavalli et al discloses obtaining an image from a camera device and inputting the obtained image into a neural network to generate a light intensity map. The light intensity map for the input image indicates the estimated intensity of light emanating from each pixel within a panoramic environment. (0051-0052) Thus, the light intensity map is viewed as a luminance map. In addition, a neural network is a machine learning algorithm (see 0005).
It would have been obvious to one of ordinary skill in the art before the effective filing date to have modified the cited art with the cited disclosed features of Sunkavalli et al since it would have provided the benefit of accurate or robust methods for estimating illumination intensity of a panoramic environment for a single image. (0004)
Furthermore, Shibata discloses detecting the luminance value of each individual region R; however, the cited art fails to specifically disclose a testing operation comparing the luminance map with a measured luminance map. However, Fdhal discloses comparing measured luminance map(s) 28 to desired luminance map(s) 34 (0027).
It would have been obvious to one of ordinary skill in the art before the effective filing date to have modified the cited art with the cited disclosed features of Fdhal et al since it would have provided the benefit of improving illumination uniformity across the viewing area. (0027)
Furthermore, the cited art fails to specifically disclose another testing operation that compares the adapted light pattern with a measured light pattern. However, Takai et al discloses comparing a light pattern of a captured image with a reference light pattern (0042, 0147, 0152, 0160).
It would have been obvious to one of ordinary skill in the art before the effective filing date to have modified the cited art with the cited disclosed features of Takai et al since it would have provided the benefit of allowing an occupant to accurately recognize visual communication between a vehicle and an object. (0007)
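For illustration only, and without characterizing any reference of record, the claimed first training operation (providing a luminance map from a training dataset of image data) and the corresponding testing operation (comparing the provided luminance map with a measured luminance map) could be sketched along the following lines; the network architecture, layer sizes, loss criterion, and all names below are assumptions of this sketch rather than disclosures of Shibata, Sunkavalli, or Fdhal:

    import torch
    import torch.nn as nn

    class LuminanceNet(nn.Module):
        # Small convolutional network mapping an RGB camera image to a one-channel luminance map.
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, kernel_size=3, padding=1),
            )

        def forward(self, images):
            return self.net(images)

    def train_step(model, optimizer, images, measured_maps):
        # Training operation: provide luminance maps for a training dataset of image data
        # and fit them to the measured luminance maps.
        loss = nn.functional.mse_loss(model(images), measured_maps)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    def test_step(model, images, measured_maps):
        # Testing operation: compare the luminance map provided by the model
        # with the measured luminance map.
        with torch.no_grad():
            return nn.functional.mse_loss(model(images), measured_maps).item()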
As per dependent claim 2, Shibata et al discloses the method is performed within a rate of between 0 and 2 seconds. (0047, 0065: discloses occurring every 0.1-5 ms, a rate between 0 and 2 seconds)
As per dependent claim 3, Shibata et al discloses wherein the transforming the image data into a luminance map is carried out by the control unit (0051-0052 discloses a control unit, having a luminance analyzer, which carries out the step of determining the luminance of the image data (transforming the image data into a luminance map))
As per dependent claim 9, Shibata et al discloses the image data is acquired by an infrared camera (0148)
As per independent claim 14, Claim 14 recites similar limitations as in Claim 1 and is rejected under similar rationale. Furthermore, Shibata et al discloses a method for operating an automotive lighting device with a matrix arrangement of light pixels, (FIG 1, 2A; 0044: The optical deflection device 26 includes a micro mirror array 32 in which multiple micro mirror elements 30 are arranged in a matrix) the method comprising the steps of:
acquiring image data of a working zone in front of the automotive lighting device; (0035, 0050: imager of the vehicular lamp of the vehicle captures images of an area in front of the vehicle)
transforming the image data into a luminance map; (0051-0052: image data is transmitted to luminance analyzer to create a spatial distribution of light data. Luminance of each region is identified where the collection of region is a form of a map)
wherein transforming the image data into a luminance map is implemented by a training operation that provides a luminance map with a training dataset of image data; (0046, 0056, 0070: the luminance of each individual region R is detected every 0.1 to 5 ms, and the luminance of the individual region R is associated with pedestrians, etc., along with defining luminance ranges L1-L3. Furthermore, Shibata discloses providing a luminance map with a training dataset of image data through the detection of the luminance value of each individual region R; see, e.g., 0052-0054)
providing a desired light pattern (0053: The lamp controller 18 sets/provides an illuminance value of light emitted to each individual region R)
calculating an adapted light pattern which provides the desired light pattern when projected over the luminance map; (FIG 5A; 0053-0057, 0082, 0074, 0218; Claim 1: based on the detection result from the luminance analyzer 14, the illuminance setting unit 42 determines the illuminance value of light emitted to each individual region R; the illuminance setting unit 42 produces a predetermined light distribution pattern based on the relationships between the detected luminance value and the set illuminance value shown)
wherein the calculating the adapted light pattern is carried out by (0053: illuminance value provided/carried/calculated by the illuminance setting unit 42 of control device 50 (FIG 1)): a training operation to calculate an adapted light pattern in response to a training dataset of luminance map and desired light pattern (FIG 5A; 0053-0057, 0082, 0074, 0218; Claim 1: set illuminance value for each luminance range L1-L3; 0027: the light source unit 10 changes the light distribution pattern every 0.1 to 5 ms, so the detected luminance value of each individual region R measured after the change can be said to be a light pattern).
projecting the adapted light pattern (0047-0049, 0065 0081: the adapted lighted pattern produced is projected)
While Shibata discloses the use of machine learning algorithms (0068, 0153), Shibata does not disclose the luminance analyzer using a machine learning algorithm to transform the captured image into a spatial distribution of light data. In other words, Shibata does not disclose wherein an operation of transforming the image data includes a use of a machine learning algorithm. However, Sunkavalli et al discloses obtaining an image from a camera device and inputting the obtained image into a neural network to generate a light intensity map. The light intensity map for the input image indicates the estimated intensity of light emanating from each pixel within a panoramic environment. (0051-0052) Thus, the light intensity map is viewed as a luminance map. In addition, a neural network is a machine learning algorithm (see 0005).
It would have been obvious to one of ordinary skill in the art before the effective filing date to have modified the cited art with the cited disclosed features of Sunkavalli et al since it would have provided the benefit of accurate or robust methods for estimating illumination intensity of a panoramic environment for a single image. (0004)
In addition, Shibata et al discloses wherein the transforming of the image data into a luminance map is carried out by a control unit (0051-0052 discloses a control unit, having a luminance analyzer, which determines the luminance of the image data) which is previously configured to transform the image data into a luminance map in a first training process of the control unit by a means of: training the control unit to provide a luminance map with a training dataset of image data (see 0046, 0056, 0070, as explained above). As explained above, Shibata discloses providing a luminance map with a training dataset of image data through the detection of the luminance value of each individual region R (see, e.g., 0052-0054).
Furthermore, as explained above, Shibata discloses detecting the luminance value of each individual region R; however, the cited art fails to specifically disclose a testing operation comparing the luminance map with a measured luminance map / testing the control unit by comparing the luminance map provided by the control unit with a measured luminance map. However, Fdhal discloses a processor (control unit) that compares measured luminance map(s) 28 to desired luminance map(s) 34 (0027).
It would have been obvious to one of ordinary skill in the art before the effective filing date to have modified the cited art with the cited disclosed features of Fdhal et al since it would have provided the benefit of improving illumination uniformity across the viewing area. (0027)
In addition, Shibata et al discloses wherein the calculating the adapted light pattern is carried out by a control unit (FIG 1: illuminance setting unit 42 of control device 50) which is previously configured to calculate the adapted light pattern in a second training process of the control unit by a means of (0053: illuminance value provided by the illuminance setting unit 42): training the control unit to calculate an adapted light pattern in response to a training dataset of luminance map and the desired light pattern (see FIG 5A; 0053-0057, 0082, 0074, 0218; Claim 1 as explained above).
Furthermore, the cited art fails to specifically disclose another testing operation that compares the adapted light pattern with a measured light pattern / the control unit comparing the adapted light pattern that is calculated with a measured light pattern. However, Takai et al discloses comparing a light pattern of a captured image with a reference light pattern (0042, 0147, 0152, 0160), and 0152 discloses a control unit performing the comparison.
It would have been obvious to one of ordinary skill in the art before the effective filing date to have modified the cited art with the cited disclosed features of Takai et al since it would have provided the benefit of allowing an occupant to accurately recognize visual communication between a vehicle and an object. (0007)
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Shibata et al in further view of Sunkavalli et al in further view of Fdhal et al in further view of Takai et al in further view of Ando (US20210289604, EFD 2016).
As per dependent claim 4, while Shibata discloses the use of machine learning algorithms (0068, 0153), the cited art does not disclose the illuminance setting unit using a machine learning algorithm to calculate an adapted light pattern. In other words, the cited art fails to specifically disclose that the training operation that computes the adapted light pattern includes use of the machine learning algorithm. However, Ando discloses that the computing of the adapted light pattern includes use of a machine learning algorithm (0013: calculating, using a neural network, illumination pattern information for generating an illumination pattern).
It would have been obvious to one of ordinary skill in the art before the effective filing date to have modified the cited art with the cited disclosed features of Ando since it would have provided the benefit of calculating an optimal illumination pattern even when the illumination target changes in a complex manner. (0006)
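Again for illustration only, and not as a characterization of Ando or any other reference of record, the claimed second training operation (calculating an adapted light pattern in response to a training dataset of luminance maps and desired light patterns) and the second testing operation (comparing the calculated adapted light pattern with a measured light pattern) could be sketched as follows; the architecture, input stacking, loss criterion, and all names are assumptions of this sketch:

    import torch
    import torch.nn as nn

    class AdaptedPatternNet(nn.Module):
        # Small network mapping a stacked (luminance map, desired light pattern) input
        # to a per-pixel adapted light pattern for a matrix of light pixels.
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, kernel_size=3, padding=1),
            )

        def forward(self, luminance_maps, desired_patterns):
            return self.net(torch.cat([luminance_maps, desired_patterns], dim=1))

    def train_pattern_step(model, optimizer, luminance_maps, desired_patterns, target_patterns):
        # Second training operation: calculate an adapted light pattern in response to a
        # training dataset of luminance maps and desired light patterns.
        loss = nn.functional.mse_loss(model(luminance_maps, desired_patterns), target_patterns)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    def test_pattern_step(model, luminance_maps, desired_patterns, measured_patterns):
        # Second testing operation: compare the calculated adapted light pattern
        # with a measured light pattern.
        with torch.no_grad():
            return nn.functional.mse_loss(model(luminance_maps, desired_patterns), measured_patterns).item()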
Claim 7 remains rejected under 35 U.S.C. 103 as being unpatentable over Shibata et al in further view of Sunkavalli et al in further view of Fdhal et al in further view of Takai et al in further view of Darrer et al (US20210041539, EFD 8/7/2019)
As per dependent claim 7, Shibata et al discloses identifying the position of other objects in the working zone (FIG 3; 0121-0122, 0151). However, the cited art fails to disclose wherein the luminance map isolates the position of the road from the position of other objects in the working zone. However, Darrer et al discloses a luminance image (a form of a map) that distinguishes different luminance intensity levels. These different levels indicate whether the luminance refers to the headlights of oncoming vehicles or to markings of the road. Furthermore, the image indicates which lane the vehicle is traveling in and which lanes other vehicles are traveling in. In particular, the image shows the center dotted line to the left and the solid line to the right of the lane the vehicle is traveling in. The images also show the other lanes in which the other traveling vehicles are, with the solid white line to the far left and the center dotted line to the right (FIG. 3, 0046). Thus, Darrer shows isolating/distinguishing the position of the road from the position of other objects in the working zone.
It would have been obvious to one of ordinary skill in the art before the effective filing date to have modified the cited art with the cited disclosed features of Darrer et al since it would have provided the intrinsic advantage of a simple and efficient method of identifying various objects on the road based on their luminance.
Claim 8 remains rejected under 35 U.S.C. 103 as being unpatentable over Shibata et al in further view of Sunkavalli et al in further view of Fdhal et al in further view of Takai et al in further view of Wood (US20150007217, 2015)
As per dependent claim 8, the cited art fails to specifically disclose the image data includes RGB data. However, Wood discloses the image data includes RGB data (0039)
It would have been obvious to one of ordinary skill in the art before the effective filing date to have modified the cited art with the cited disclosed features of Wood since the RGB image data has color information that provides the intrinsic advantage of a clearer and more realistic view of the scene.
Claim(s) 12-13 remain rejected under 35 U.S.C. 103 as being unpatentable over Shibata et al (US20200139879, pub. 5/7/2020) in further view of Sunkavalli et al (US20190164261, 2019) in further view of Fdhal et al (US20130113383, 2013)
As per independent claim 12, Shibata et al discloses a system comprising:
matrix arrangement of solid-state light sources (0043-0044, 0047: multiple micro mirrors arranged in an array (a matrix) for reflecting light from a light source);
a camera configured to acquire image data (0035, 0050)
an automotive lighting device that stores a computer program that instructs the automotive lighting device to perform a number of functions (FIG 1, 2A; 0044: The optical deflection device 26 includes a micro mirror array 32 in which multiple micro mirror elements 30 are arranged in a matrix), performing the operations of:
acquire an image data of a working zone from the camera; (0035, 0050: imager (camera) of the vehicular lamp of the vehicle captures images of an area in front of the vehicle)
transforming the image data into a luminance map; (0051-0052: image data is transmitted to luminance analyzer to create a spatial distribution of light data. Luminance of each region is identified where the collection of region is a form of a map)
…wherein transforming the image data into a luminance map is implemented by a training operation that provides a luminance map with a training dataset of image data; (0046, 0056, 0070: the luminance of each individual region R is detected every 0.1 to 5 ms, and the luminance of the individual region R is associated with pedestrians, etc., along with defining luminance ranges L1-L3. Furthermore, Shibata discloses providing a luminance map with a training dataset of image data through the detection of the luminance value of each individual region R; see, e.g., 0052-0054)
providing a desired light pattern (0053: The lamp controller 18 sets/provides an illuminance value of light emitted to each individual region R)
calculate an adapted light pattern which provides the desired light pattern when projected over the luminance map; (FIG 5A; 0053-0057, 0082, 0074, 0218; Claim 1: based on the detection result from the luminance analyzer 14, the illuminance setting unit 42 determines the illuminance value of light emitted to each individual region R; the illuminance setting unit 42 produces a predetermined light distribution pattern based on the relationships between the detected luminance value and the set illuminance value shown)
projecting the adapted light pattern with the matrix arrangement of solid-state light sources (FIG 2, 4; 0043-0044, 0047-0049, 0065 0081: the adapted lighted pattern produced is projected)
While Shibata discloses the use of machine learning algorithms (0068, 0153), Shibata does not disclose the luminance analyzer using a machine learning algorithm to transform the captured image into a spatial distribution of light data. In other words, Shibata does not disclose wherein an operation of transforming the image data includes a use of a machine learning algorithm. However, Sunkavalli et al discloses obtaining an image from a camera device and inputting the obtained image into a neural network to generate a light intensity map. The light intensity map for the input image indicates the estimated intensity of light emanating from each pixel within a panoramic environment. (0051-0052) Thus, the light intensity map is viewed as a luminance map. In addition, a neural network is a machine learning algorithm (see 0005).
It would have been obvious to one of ordinary skill in the art before the effective filing date to have modified the cited art with the cited disclosed features of Sunkavalli et al since it would have provided the benefit of accurate or robust methods for estimating illumination intensity of a panoramic environment for a single image. (0004)
Furthermore, Shibata discloses detecting the luminance value of each individual region R; however, the cited art fails to specifically disclose a testing operation comparing the luminance map with a measured luminance map. However, Fdhal discloses comparing measured luminance map(s) 28 to desired luminance map(s) 34 (0027).
It would have been obvious to one of ordinary skill in the art before the effective filing date to have modified the cited art with the cited disclosed features of Fdhal et al since it would have provided the benefit of improving illumination uniformity across the viewing area. (0027)
As per dependent claim 13, Shibata et al discloses wherein the matrix arrangement includes at least 2000 solid-state light sources (FIG 2A; 0047).
Response to Arguments
Applicant's arguments filed 1/29/26 have been fully considered but they are not persuasive.
A) On page 8, in regards to the 103 rejection of Claim 1, Applicant argues the cited art does not teach the subject matter/limitations “a training operation that computes the adapted light pattern in response to a training dataset of the luminance map and the desired light pattern; another testing operation that compares the adapted light pattern with a measured light pattern”. Applicant argues that the office action does not address these different operations and that the rejection appears to assume the same training operation, which places all the presented references in peril because it remains unclear where the multiple training operations can qualify Applicant’s particular training operation to the appropriate corresponding limitation. Thus, in summary, Applicant argues that the cited art does not address each of the limitations. However, the Examiner disagrees.
Based on the arguments provided by the Applicant with respect to the claimed features in the claim limitation, the Examiner respectfully submits that Applicant merely states that the cited art (Shibata, Sunkavalli, Fdhal, or Takai) does not teach the limitations, and thereby merely concludes that each of the references fails to teach the limitation without any explanation or reasoning as to how the cited disclosures fail to teach the claimed subject matter. Applicant does not explain how the claim language of the limitation differs from the teachings of each reference, does not describe the differences with any supporting evidence from the specification stating or describing the limitation, and does not explain how each of the cited references is specifically different from Applicant's invention. Therefore, Applicant's arguments fail to establish that the cited art is silent on, or does not teach, the limitation.
The Examiner respectfully submits that the previous office action provided a detailed explanation with reasons why the combination of cited art, Shibata and Takai, taught the argued limitation(s) and/or subject matter. In particular, the Examiner explained how Shibata taught the argued subject matter of a training operation that computes the adapted light pattern in response to a training dataset of the luminance map and the desired light pattern, and how Takai taught the argued subject matter of another testing operation that compares the adapted light pattern with a measured light pattern. Thus, the Examiner respectfully states that it appears Applicant did not follow the Examiner's complete analysis and explanation of how Shibata and Takai taught these argued limitations of multiple training operations. Thus, based on the broadest reasonable interpretation, Shibata et al discloses:
calculating an adapted light pattern which provides the desired light pattern when projected over the luminance map; (FIG 5A; 0053-0057, 0082, 0074, 0218; Claim 1: based on the detection result from the luminance analyzer 14, the illuminance setting unit 42 determines the illuminance value of light emitted to each individual region R; the illuminance setting unit 42 produces a predetermined light distribution pattern based on the relationships between the detected luminance value and the set illuminance value shown)
wherein the calculating the adapted light pattern is carried out by (0053: illuminance value provided/carried/calculated by the illuminance setting unit 42 of control device 50 (FIG 1)): a training operation to compute an adapted light pattern in response to a training dataset of luminance map and desired light pattern (FIG 5A; 0053-0057, 0082, 0074, 0218; Claim 1: set illuminance value for each luminance range L1-L3; 0027: the light source unit 10 changes the light distribution pattern every 0.1 to 5 ms, so the detected luminance value of each individual region R measured after the change can be said to be a light pattern).
Furthermore, the cited art fails to specifically disclose another testing operation that compares the adapted light pattern with a measured light pattern. However, Takai et al discloses comparing a light pattern of a captured image with a reference light pattern (0042, 0147, 0152, 0160).
It would have been obvious to one of ordinary skill in the art before the effective filing date to have modified the cited art with the cited disclosed features of Takai et al since it would have provided the benefit of allowing an occupant to accurately recognize visual communication between a vehicle and an object. (0007)
Thus, Examiner’s rejection clearly addresses the multiple training operations claimed in claim 1 and the 103 rejection remains for this reason.
B) On page 8, in regards to claim 1, Applicant further states/argues: Even if multiple training operations can be found, the Office would be required to state a reasonable motivation of why or how one would connect the rationale for doing so, which appears currently absent by the outstanding rejection. In other words, it appears Applicant is arguing there is no motivation for combining Takai with Shibata used to reject the limitation in Argument A. However, the Examiner disagrees.
In response to applicant’s argument that there is no teaching, suggestion, or motivation to combine the references, the examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988), In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992), and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). In this case, as explicitly stated in the previous office action, the Examiner points the Applicant to paragraph 0007 of Takai that states “An object of the display system present disclosure is to provide a vehicle display system that allows an occupant to accurately recognize visual communication between a vehicle and an object, and a vehicle including the vehicle display system.” Thus, motivation is explicitly found in the Takai reference.
Thus, the Office has provided a reasonable motivation of why or how one would connect the rationale for doing so, which was clearly present within the outstanding rejection.
C) On page 9, in regards to the 103 rejection of Claim 12, Applicant traverses the rejection because the combined rejection requires (the teaching of) "transforming the image data into a luminance map is implemented by a training operation that provides the luminance map with a training dataset of the image data AND a testing operation that compares the luminance map with a measured luminance map;" In addition, Applicant argues that the presented rejection renders it unclear whether both the stated limitations are met for qualifying the "transforming the image data into a luminance map" concurrently. In other words, it appears Applicant is arguing that the cited art does not teach these claimed features. However, the Examiner disagrees.
Based on the arguments provided by the Applicant with respect to the claimed features in the claim limitation, the Examiner respectfully submits that Applicant merely states that the cited art (Shibata, Sunkavalli, Fdhal) does not teach the limitations, and thereby merely concludes that each of the references fails to teach the limitation without any explanation or reasoning as to how the cited disclosures fail to teach the claimed subject matter. Applicant does not explain how the claim language of the limitation differs from the teachings of each reference, does not describe the differences with any supporting evidence from the specification stating or describing the limitation, and does not explain how each of the cited references is specifically different from Applicant's invention. Therefore, Applicant's arguments fail to establish that the cited art is silent on, or does not teach, the limitation.
The Examiner respectfully submits that the previous office action provided a detailed explanation with reasons why the combination of cited art, Shibata and Fdhal, taught the argued limitation(s) and/or subject matter. In particular, the Examiner explained how Shibata taught the argued subject matter of transforming the image data into a luminance map being implemented by a training operation that provides the luminance map with a training dataset of the image data, and how Fdhal taught the argued subject matter of a testing operation that compares the luminance map with a measured luminance map. Thus, the Examiner respectfully states that it appears Applicant did not follow the Examiner's complete analysis and explanation of how Shibata and Fdhal taught these argued limitations of multiple training operations. Thus, based on the broadest reasonable interpretation, Shibata et al discloses:
transforming the image data into a luminance map; (0051-0052: image data is transmitted to luminance analyzer to create a spatial distribution of light data. Luminance of each region is identified where the collection of region is a form of a map)
…wherein transforming the image data into a luminance map is implemented by a training operation that provides a luminance map with a training dataset of image data; (0046, 0056, 0070: the luminance of each individual region R is detected every 0.1 to 5 ms, and the luminance of the individual region R is associated with pedestrians, etc., along with defining luminance ranges L1-L3. Furthermore, Shibata discloses providing a luminance map with a training dataset of image data through the detection of the luminance value of each individual region R; see, e.g., 0052-0054)
Furthermore, as explained above, Shibata discloses detecting the luminance value of each individual region R (0046, 0056, 0070); however, the cited art fails to specifically disclose a testing operation comparing the luminance map with a measured luminance map. However, Fdhal discloses comparing measured luminance map(s) 28 to desired luminance map(s) 34 (0027).
It would have been obvious to one of ordinary skill in the art before the effective filing date to have modified the cited art with the cited disclosed features of Fdhal et al since it would have provided the benefit of improving illumination uniformity across the viewing area. (0027)
Thus, Examiner’s rejection clearly addresses the multiple training operations claimed in claim 12 and the 103 rejection remains for this reason.
D) All other arguments on page 9 that were not specifically addressed by the Examiner refer to dependent claims that relate to, or depend on, the topics above; thus, the rationale above, and/or the Examiner's explanations used in the rejections of those claims as described above, responds to those similar arguments.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
If the Applicant chooses to amend the claims in future filings, the Examiner kindly notes that any new limitation(s) added to the claims must be described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor had possession of the claimed subject matter, in order to meet the written description requirement of 35 USC 112, first paragraph. To help expedite prosecution, promote compact prosecution, and prevent a possible 112(a)/first paragraph rejection, the Examiner respectfully requests, for each new limitation added to the claims in a future filing, that the Applicant cite in the remarks the location within the specification showing support for that new limitation. In addition, MPEP 2163.04(I)(B) states that a prima facie case under 112(a)/first paragraph may be established if a claim has been added or amended, the support for the added limitation is not apparent, and applicant has not pointed out where the added limitation is supported.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID FABER whose telephone number is (571)272-2751. The examiner can normally be reached Monday - Thursday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. Please refer to MPEP 713.09 for scheduling interviews after the mailing of this office action.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Adam Queler, can be reached at 571-272-4140. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ADAM M QUELER/Supervisory Patent Examiner, Art Unit 2172
/D.F/Examiner, Art Unit 2172