Prosecution Insights
Last updated: April 19, 2026
Application No. 18/621,917

IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND NON-TRANSITORY STORAGE MEDIUM

Non-Final OA §103
Filed: Mar 29, 2024
Examiner: GARCIA, SANTIAGO
Art Unit: 2673
Tech Center: 2600 — Communications
Assignee: Lapis Technology Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 88% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 88% (above average; 895 granted / 1015 resolved; +26.2% vs TC avg)
Interview Lift: +12.8% (moderate lift, resolved cases with vs. without interview)
Avg Prosecution: 2y 5m (typical timeline)
Currently Pending: 21 applications
Total Applications: 1036 (across all art units)

Statute-Specific Performance

§101: 7.6% (-32.4% vs TC avg)
§103: 60.2% (+20.2% vs TC avg)
§102: 18.7% (-21.3% vs TC avg)
§112: 2.3% (-37.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 1015 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of AIA Status
The present application is being examined under the AIA first-inventor-to-file provisions.

Priority
Receipt is acknowledged of certified copies of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.

Information Disclosure Statement
The information disclosure statements (IDS) submitted on 03/29/2024 have been considered by the examiner.

Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-8 are rejected under 35 U.S.C. 103 as being unpatentable over Yamamoto (US 2021/0160432) in view of Kobayashi (US 2018/0139391).

As per claims 1 and 7-8, Yamamoto teaches an image processing device, method, and non-transitory storage medium comprising a memory and a processor coupled to the memory (Yamamoto, ¶[0044]: "a read only memory (ROM) 24b, a random access memory (RAM) 24c" represents the memory; ¶[0013]: "FIG. 1 is a schematic plan view of an exemplary vehicle that can be equipped with an image processing device according to an embodiment" represents the image processing device), wherein the processor is configured to: acquire an image aiding a field of view of a driver of a vehicle (Yamamoto, fig. 1 and ¶[0041]: "The image processing device of the present embodiment performs computation or image processing to image data generated by the imagers 14 to generate an image having a wider viewing angle or a virtual image of the vehicle 10 viewed from above, front, or laterally (e.g., a bird's-eye image (plan image), a lateral-view image, or a front-view image)." These images, as seen by all the cameras in fig. 1, would aid the driver's field of view, the same as in fig. 1 of the applicant), and time information of a point in time when the image was captured (Yamamoto, ¶[0055]: "The particular mode determiner 31 determines the brightness around the vehicle as the particular state of luminance when the luminance of the lateral images of the vehicle 10 (the imaging region 36SL and the imaging region 36SR) at the time of imaging satisfies at least two conditions." The phrase "at the time of imaging" represents time information of a point in time; see also "FIG. 23 is a schematic view of an exemplary state of luminance at the time of imaging the imaging region 36SL and the imaging region 36SR lateral to the vehicle 10 when the particular state of luminance is determined."); generate a synthesized function using the two or more conversion functions selected; and convert the luminance of the image using the synthesized function (Yamamoto, ¶[0054]: "The particular mode determiner 31 determines whether the state of the ambient brightness of the vehicle 10 is a normal state of luminance or a particular state of luminance. The normal state of luminance refers to the state that it is unlikely that the images exhibit a large difference in luminance thereamong. In a light environment or presence of lights such as proper illumination around the vehicle 10, for example, a large difference in luminance is unlikely to occur among the images generated by the imagers 14. For example, in the daytime, the images may exhibit a difference in luminance due to shadows, however, the images do not exhibit a large difference in luminance after diaphragm adjustment. Similarly, in an environment around the vehicle 10 with light of, for example, street lamps or the headlights of other vehicles at night, it is unlikely that a large difference in luminance occurs among the images after diaphragm adjustment. Meanwhile, the particular state of luminance refers to the state that it is likely that the images exhibit a large difference in luminance thereamong. For example, in a completely dark environment around the vehicle 10 with no light such as street lamps or lighting of other vehicles at night (e.g., in a mountain during night-time), the vehicle 10 is located alone with the headlights ON. In such a case, only the area ahead of the vehicle 10 is light and the area behind the vehicle 10 is dark, resulting in a very large difference in luminance." This represents generating a synthesized function using the two or more conversion functions selected and converting the luminance of the image using the synthesized function, since different adjustments need to be made at different times of day).
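The mechanism recited in claim 1 (select two or more luminance-conversion functions, at least one specific to the current time band; blend them into one synthesized function; apply it to the image) can be sketched as follows. The curve shapes, time bands, and blend weights here are illustrative assumptions, not taken from the application or the cited art.

```python
# Hypothetical sketch of the claim-1 pipeline. The specific curves
# (a mild daytime gamma, a stronger night-time lift) and the 6:00-18:00
# day band are illustrative assumptions.

def day_curve(y):
    """Mild gamma for bright scenes (assumed shape)."""
    return 255 * (y / 255) ** 0.9

def night_curve(y):
    """Stronger shadow lift for dark scenes (assumed shape)."""
    return 255 * (y / 255) ** 0.5

def select_functions(hour):
    """Pick the time-band-specific curve plus an identity 'global' curve."""
    band_fn = day_curve if 6 <= hour < 18 else night_curve
    return [band_fn, lambda y: y]

def synthesize(fns, weights):
    """Blend the selected curves into one synthesized conversion function."""
    def combined(y):
        return sum(w * f(y) for f, w in zip(fns, weights))
    return combined

def convert_image(pixels, hour, weights=(0.7, 0.3)):
    """Convert the luminance of each pixel with the synthesized function."""
    fn = synthesize(select_functions(hour), weights)
    return [min(255, max(0, round(fn(p)))) for p in pixels]
```

With these assumed curves, a mid-gray pixel is lifted more at night than in daytime, while black and white endpoints are preserved.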
Yamamoto does not clearly teach, but Kobayashi teaches: select, from a plurality of conversion functions that convert luminance of the image using mutually different characteristics and that include two or more time-band-specific conversion functions determined specifically for time bands in a one-day period, two or more of the conversion functions including at least one of the two or more time-band-specific conversion functions according to the time information (Kobayashi, ¶[0044]: "According to human visual characteristics, since recognizable contrast step changes depending on luminance, the model function S0 nonlinearly changes with respect to the object luminance expressed on the horizontal axis. On the upper side of the model function S0, since the contrast step with respect to the change amount ΔL of the luminance is large, it is possible to recognize the change amount of the luminance per one code value difference. On the other hand, on the lower side of the model function S0, since the contrast step with respect to the change amount ΔL of the luminance is small, it is impossible to recognize the change amount of the luminance per one code value difference. That is, in the area below the model function S0, the gradation is wasted." The horizontal axis would correspond to the different time periods, and as time passes the changing curve would represent the conversion functions, with this particular function selected as it is used. Two or more band splits would be two different time periods on the axis, and this would be specific to time; any two or more time periods of this function are considered to meet this limitation, the same as the function in fig. 3 of the applicant's application, which shows the time periods T1, T0, and T2).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Kobayashi (selecting, from a plurality of conversion functions, functions that convert luminance of the image for different time periods) with Yamamoto's system to improve the vision of the system. The motivation would have been to improve detectability by a human, as taught by Kobayashi, ¶[0008]: "The present invention has been made in consideration of the above situation, and efficiently encodes image signals in consideration of human visual characteristics."

As per claim 2, Yamamoto in view of Kobayashi teaches the image processing device of claim 1, wherein: the plurality of conversion functions includes two or more illumination-specific conversion functions determined specifically for illumination in an environment captured by the image; and the processor is configured to: acquire illumination information indicating illumination in the environment captured by the image (Yamamoto, ¶[0054]: "For example, in the daytime, the images may exhibit a difference in luminance due to shadows, however, the images do not exhibit a large difference in luminance after diaphragm adjustment." Illumination differs at different time periods and in different environments, such as daylight or street lamps); and from the plurality of conversion functions, select two or more of the conversion functions including at least one of the illumination-specific conversion functions according to the illumination information (Yamamoto, ¶[0054]: "Similarly, in an environment around the vehicle 10 with light of, for example, street lamps or the headlights of other vehicles at night, it is unlikely that a large difference in luminance occurs among the images after diaphragm adjustment." Different environments require different adjustments).
As per claim 3, Yamamoto in view of Kobayashi teaches the image processing device of claim 1, wherein: the plurality of conversion functions includes a global conversion function that converts luminance uniformly for an entirety of the image; and the processor is configured to select two or more of the conversion functions including at least one of the two or more time-band-specific conversion functions and the global conversion function (Yamamoto, ¶[0055]: "The particular mode determiner 31 determines the brightness around the vehicle as the particular state of luminance when the luminance of the lateral images of the vehicle 10 (the imaging region 36SL and the imaging region 36SR) at the time of imaging satisfies at least two conditions." The global conversion would be an adjustment applied to all of the images when everything is uniform, as in this case; in daylight, the adjustment would follow these different variables, or remain the same, depending on illumination differences).

As per claim 4, Yamamoto in view of Kobayashi teaches the image processing device according to claim 3, wherein the processor is configured to generate the global conversion function based on a luminance histogram indicating a distribution of respective luminances for each pixel in the image (Yamamoto, ¶[0052]: "The luminance of the regions of interest 40 refers to, for example, the average of luminance of pixels included in the corresponding regions of interest 40." This represents a luminance histogram indicating a distribution of respective luminances for each pixel in the image, since for the average to be taken, every single luminance for each pixel must be known).
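Claim 4's "global conversion function based on a luminance histogram" is not spelled out in the cited passages; classic histogram equalization is one standard way to derive such a function from a histogram, sketched here purely for illustration.

```python
# Illustrative sketch only: build a global luminance conversion
# (a look-up table) from the image's luminance histogram via
# histogram equalization. Not taken from the application.

def equalization_lut(pixels, levels=256):
    """Derive a global conversion LUT from the luminance histogram."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1          # luminance histogram over all pixels
    total = len(pixels)
    cdf = 0
    lut = []
    for count in hist:
        cdf += count          # cumulative distribution of luminance
        lut.append(round((levels - 1) * cdf / total))
    return lut

def apply_global(pixels, lut):
    """Apply the global conversion uniformly to the entire image."""
    return [lut[p] for p in pixels]
```

Because the LUT comes from the cumulative histogram, it stretches whatever luminance range the image actually uses across the full output range, which is the usual point of a histogram-derived global conversion.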
As per claim 5, Yamamoto in view of Kobayashi teaches the image processing device of claim 1, wherein: the plurality of conversion functions includes a local conversion function that converts luminance by part of the image (Yamamoto, ¶[0051]: "According to the present embodiment, for example, in the processing for focusing on the imaging region 36F, one (e.g., the imaging region 36F) of a pair of imaging regions 36 (e.g., the imaging region 36F and the imaging region 36R) spaced apart across the vehicle 10 may be referred to as a first imaging region. One (e.g., the imaging region 36SL) of the pair of imaging regions 36 (e.g., the imaging region 36SL and the imaging region 36SR) adjacent to the first imaging region may be referred to as a second imaging region." This represents a local conversion function); and the processor is configured to select two or more of the conversion functions including at least one of the two or more time-band-specific conversion functions and the local conversion function (Yamamoto, ¶[0054]: "For example, in the daytime, the images may exhibit a difference in luminance due to shadows, however, the images do not exhibit a large difference in luminance after diaphragm adjustment. Similarly, in an environment around the vehicle 10 with light of, for example, street lamps or the headlights of other vehicles at night, it is unlikely that a large difference in luminance occurs among the images after diaphragm adjustment." The daytime-specific adjustments represent the time-band-specific conversions).
As per claim 6, Yamamoto in view of Kobayashi teaches the image processing device of claim 5, wherein the processor is configured to generate the local conversion function based on a luminance histogram indicating a distribution of respective luminances for each pixel by part of the image (Yamamoto, ¶[0052]: "The luminance of the regions of interest 40 refers to, for example, the average of luminance of pixels included in the corresponding regions of interest 40." This represents a luminance histogram indicating a distribution of respective luminances for each pixel, since for the average to be taken, every single luminance for each pixel must be known).

Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SANTIAGO GARCIA, whose telephone number is (571) 270-5182. The examiner can normally be reached Monday-Friday, 9:30am-5:30pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chineyere Wills-Burns, can be reached at (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SANTIAGO GARCIA/
Primary Examiner, Art Unit 2673
/SG/

Prosecution Timeline

Mar 29, 2024
Application Filed
Feb 04, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599912: Method for controlling and/or regulating the feed of material to be processed to a crushing and/or screening plant of a material processing device (2y 5m to grant; granted Apr 14, 2026)
Patent 12598596: CHANNEL SELECTION BASED ON MULTI-HOP NEIGHBORING-ACCESS-POINT FEEDBACK (2y 5m to grant; granted Apr 07, 2026)
Patent 12587818: DEVICE AND ROLE BASED USER AUTHENTICATION (2y 5m to grant; granted Mar 24, 2026)
Patent 12574708: COMMUNICATION FOR USER EQUIPMENT GROUPS (2y 5m to grant; granted Mar 10, 2026)
Patent 12574764: CLIENT COOPERATIVE TROUBLESHOOTING (2y 5m to grant; granted Mar 10, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 88%
With Interview: 99% (+12.8%)
Median Time to Grant: 2y 5m
PTA Risk: Low
Based on 1015 resolved cases by this examiner. Grant probability derived from career allow rate.
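The headline figures follow from the career record quoted on this page. A quick sketch reproduces them; note that the additive interview lift and the 99% cap are assumptions about how the tool combines the numbers, not a documented formula.

```python
# Figures taken from this page: 895 granted of 1015 resolved,
# with a stated +12.8 percentage-point interview lift.
granted = 895
resolved = 1015
interview_lift = 12.8  # percentage points (from the page)

allow_rate_pct = granted / resolved * 100          # career allow rate
base_pct = round(allow_rate_pct)                   # displayed grant probability

# Assumption: with-interview probability = base + lift, capped at 99%.
with_interview = min(99, round(allow_rate_pct + interview_lift))

print(base_pct, with_interview)
```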
