Prosecution Insights
Last updated: April 19, 2026
Application No. 18/149,579

INFORMATION PROCESSING APPARATUS, METHOD, AND STORAGE MEDIUM TO GENERATE A COLOR-DIFFERENCE MAP

Non-Final OA (§101, §103)
Filed: Jan 03, 2023
Examiner: ELLIOTT, JORDAN MCKENZIE
Art Unit: 2666
Tech Center: 2600 — Communications
Assignee: Canon Kabushiki Kaisha
OA Round: 3 (Non-Final)
Grant Probability: 45% (Moderate)
Expected OA Rounds: 3-4
Expected Time to Grant: 2y 10m
Grant Probability With Interview: 31%

Examiner Intelligence

Career Allow Rate: 45% (grants 45% of resolved cases; 9 granted / 20 resolved; -17.0% vs TC avg)
Interview Lift: -13.7% (minimal, roughly -14%, for resolved cases with interview)
Avg Prosecution (typical timeline): 2y 10m
Total Applications (career, across all art units): 60, of which 40 currently pending

Statute-Specific Performance

§101: 8.9% (-31.1% vs TC avg)
§103: 53.3% (+13.3% vs TC avg)
§102: 27.1% (-12.9% vs TC avg)
§112: 10.7% (-29.3% vs TC avg)
Tech Center averages are estimates; based on career data from 20 resolved cases.

Office Action

§101 §103
DETAILED ACTION

Claims 1-3, 5-8, 20 and 22 are pending in this application and have been accorded the foreign priority date of 01/06/2022 in accordance with the applicant’s claim for foreign priority. Claims 1-3, 5-8, 20 and 22 have been amended, and claims 4, 10-19, 21 and 23 have been canceled.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d).

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 01/03/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant’s submission filed on 02/06/2026 has been entered.

Response to Arguments

35 U.S.C. 101

The applicant’s arguments (see Remarks filed 02/06/2026) regarding the rejection made under 35 U.S.C. 101 have been fully considered by the examiner but are not persuasive. The examiner disagrees that the addition of the analysis of pixels on the line segments, as argued by the applicant (see Remarks filed 02/06/2026), reflects a particular solution to the problem that distinguishes the claims from an abstract idea or mental process, or that it meaningfully translates the claims into a practical application.
Taking claim 1 as an example, the claim recites: “An information processing apparatus for inspecting a distribution of colors of a subject to be inspected, comprising: (additional element of an apparatus, recited with a high level of generality) one or more memories storing instructions; (additional element of a memory, recited with a high level of generality) and one or more processors executing the instructions to function as: (additional element of a processor, recited with a high level of generality) a first setting unit configured to set, in response to a user instruction, a first line segment in a captured image of the subject; (a step of mere data gathering, in which a human could practically draw a line on a screen across an image) a second setting unit configured to set plural second line segments different from the first line segment and each having a point of intersection with the first line segment in the captured image; (a step of mere data gathering, in which a human could practically draw intersecting lines on an image) a color difference acquisition unit configured to acquire color differences in a color space between pixels on the first line segment on the point of intersection and pixels on the second line segments on the point of intersection; (a mental process in which a person could select points on the intersecting lines and mathematically determine the color changes) and a generation unit configured to generate a color-difference map for displaying a distribution of the color differences of the pixels on the plural second line segments as a two-dimensional image corresponding to the captured image (a mental process in which a human could draw a plot showing the color changes).”

Under step 2A, prong 1, one of ordinary skill in the art could practically perform the steps as a mental process by clicking on an image with a mouse to create two line segments, visually assess the color difference between points on the two segments, and
then create a map showing the differences as a color map by drawing or other methods. Under step 2A, prong 2, the claim does recite the additional elements of a processor and a memory functioning as a first and second setting unit, a color difference acquisition unit, and a generation unit. These limitations, however, fail to integrate the claim into a practical application because the units are placeholders performing tasks that could be performed as a mental process of comparison or selection by a person, or as steps of mere data gathering. Under step 2B, the plurality of units, the processor, and the memory are recited with a high level of generality and do not integrate the claim into a practical application or amount to significantly more. Therefore, claim 1 is rejected as an abstract idea. The examiner encourages the applicant to amend the claims to add elements that meaningfully translate the claims into a practical application, such as an elaboration of the methods of generating the color-difference map or of the analysis used to determine the color differences at the points of intersection.

35 U.S.C. 112(b)

The applicant’s arguments (see Remarks filed 02/06/2026) regarding the rejection made under 35 U.S.C. 112(b) have been fully considered by the examiner and are persuasive. The examiner agrees to withdraw the rejections made under 35 U.S.C. 112(b) in view of the amendments to the claims.

35 U.S.C. 103

The applicant’s arguments (see Remarks filed 02/06/2026) regarding the rejection made under 35 U.S.C. 103 have been fully considered by the examiner but are not persuasive. Applicant argues (see page 9 of Remarks filed 02/06/2026) that Yoshiura fails to teach a method in which multiple lines or line segments have a point of intersection with a first line.
The examiner disagrees. Yoshiura teaches in column 12, lines 4-37, that when a pattern is detected in the image, multiple lines are generated passing through the edge code (EC), which is a line or boundary line of a shape, and lines of a predetermined number are generated intersecting this line or boundary line, indicating that a plurality of lines may be used to generate the color differences. One of ordinary skill in the art would understand this as being analogous to the claimed limitation discussed above. [Image: Yoshiura, column 12, lines 4-37, emphasis added]

Applicant further argues (see page 9 of Remarks filed 02/06/2026) that Yoshiura does not reasonably disclose acquiring color differences between pixels on the first and second lines. The examiner disagrees. Yoshiura teaches in column 9, line 60 through column 10, line 25, that two lines, E1 and E2, are used to determine the color difference of a focal pixel that lies at the intersection of the two lines, which is analogous to finding color differences between pixels on the first and second lines or line segments, as pixels along the lines are used to determine the color differences. Therefore, for at least the reasons discussed above, the examiner maintains the 103 rejections over Yoshiura in view of Yamamoto. [Images: Yoshiura, columns 9 and 10, emphasis added]

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-3, 5-9, 20 and 22 are rejected under 35 U.S.C.
101 because the claimed invention is directed to the mental process of setting a region in an image and analyzing the differences in color, without integration into a practical application or significantly more.

Regarding claim 1, the claim recites: “An information processing apparatus for inspecting a distribution of colors of a subject to be inspected, comprising: one or more memories storing instructions; and one or more processors executing the instructions to function as: a first setting unit configured to set, in response to a user instruction, a first line segment in a captured image of the subject; a second setting unit configured to set plural second line segments different from the first line segment and each having a point of intersection with the first line segment in the captured image; a color difference acquisition unit configured to acquire color differences in a color space between pixels on the first line segment on the point of intersection and pixels on the second line segments on the point of intersection; and a generation unit configured to generate a color-difference map for displaying a distribution of the color differences of the pixels on the plural second line segments as a two-dimensional image corresponding to the captured image.”

Under step 2A, prong 1, one of ordinary skill in the art could practically perform the steps as a mental process by clicking on an image with a mouse to create two line segments, visually assessing the color difference between points on the two segments, and then creating a map showing the differences as a color map by drawing or other methods. Under step 2A, prong 2, the claim does recite the additional elements of a processor and a memory functioning as a first and second setting unit, a color difference acquisition unit, and a generation unit.
These limitations, however, fail to integrate the claim into a practical application because the units are placeholders performing tasks that could be performed as a mental process of comparison or selection by a person, or as steps of mere data gathering. Under step 2B, the plurality of units are recited with a high level of generality and do not integrate the claim into a practical application or amount to significantly more. Therefore, claim 1 is rejected as an abstract idea.

Claims 2-3 and 5-9 are drawn to the same abstract idea of a mental process for selection of a region of interest and comparison of color. Regarding claim 2, claim 2 follows the same logic as claim 1 and fails to recite additional elements that integrate the claim into a practical application or amount to significantly more. Regarding claim 3, claim 3 follows the same logic as claim 1 and fails to recite additional elements that integrate the claim into a practical application or amount to significantly more. Regarding claim 5, claim 5 follows the same logic as claim 1 and includes the additional element of the “setting unit”; this additional element neither integrates the claim into a practical application nor amounts to significantly more. Regarding claim 6, claim 6 follows the same logic as claim 1 and includes the additional element of the “setting unit”; this additional element neither integrates the claim into a practical application nor amounts to significantly more. Regarding claim 7, claim 7 follows the same logic as claim 1 and includes the additional elements of the “setting unit” and the “rectangle setting unit”; these additional elements neither integrate the claim into a practical application nor amount to significantly more. Regarding claim 8, claim 8 follows the same logic as claim 1 and includes the additional element of the “setting unit”; this additional element neither integrates the claim into a practical application nor amounts to significantly more.
Regarding claim 9, claim 9 follows the same logic as claim 1 and fails to recite additional elements that integrate the claim into a practical application or amount to significantly more.

Regarding claim 20, the claim recites: “An information processing method for inspecting a distribution of colors of a subject to be inspected, implemented by one or more memories storing instructions and one or more processors executing the instructions, the method comprising: setting, in response to a user instruction, a first line segment in a captured image of the subject; setting plural second line segments different from the first line segment and each having a point of intersection with the first line segment in the captured image; acquiring color differences in a color space between pixels on the first line segment on the point of intersection and pixels on the second line segments on the point of intersection; and generating a color-difference map for displaying a distribution of the color differences of the pixels on the plural second line segments as a two-dimensional image corresponding to the captured image.”

Under step 2A, prong 2, the claim does recite the additional elements of a processor and a memory. These limitations, however, fail to integrate the claim into a practical application because they are placeholders performing tasks that could be performed as a mental process of comparison or selection by a person, or as steps of mere data gathering. Under step 2B, the additional elements are recited with a high level of generality and do not integrate the claim into a practical application or amount to significantly more. Therefore, claim 20 is rejected as an abstract idea.
Regarding claim 22, the claim recites: “A non-transitory computer-readable storage medium storing instructions that, when executed by a computer comprising one or more memories storing instructions and one or more processors executing the instructions, cause the computer to perform an information processing method for inspecting a distribution of colors of a subject to be inspected, the method comprising: setting, in response to a user instruction, a first line segment in a captured image of the subject; setting plural second line segments different from the first line segment and each having a point of intersection with the first line segment in the captured image; acquiring color differences in a color space between pixels on the first line segment on the point of intersection and pixels on the second line segments on the point of intersection; and generating a color-difference map for displaying a distribution of the color differences of the pixels on the plural second line segments as a two-dimensional image corresponding to the captured image.”

Under step 2A, prong 1, one of ordinary skill in the art could practically perform the steps as a mental process by clicking on an image with a mouse to create two line segments, visually assessing the color difference between points on the two segments, and then creating a map showing the differences as a color map. Under step 2A, prong 2, the claim does recite the additional elements of a processor and a memory. These limitations, however, fail to integrate the claim into a practical application because they are placeholders performing tasks that could be performed as a mental process of comparison or selection by a person, or as steps of mere data gathering. Under step 2B, the additional elements are recited with a high level of generality and do not integrate the claim into a practical application or amount to significantly more. Therefore, claim 22 is rejected as an abstract idea.
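Editor's note: for readers weighing the examiner's invitation to elaborate the map-generation method, the four claimed steps can be sketched in code. This is an illustration only, not code from the application or the cited art; the function names, the Euclidean color-space distance, and the perpendicular placement of the second segments are all editorial assumptions.

```python
import numpy as np

def sample_segment(p0, p1, n):
    """Return n evenly spaced (row, col) pixel coordinates along a segment."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return np.rint(np.asarray(p0) + t * (np.asarray(p1) - np.asarray(p0))).astype(int)

def color_difference_map(image, p0, p1, n_secondary=50, half_len=20):
    """Sketch of the claimed pipeline:
    1) a first line segment (p0 -> p1, standing in for the user instruction);
    2) plural second segments, each crossing the first at one intersection point
       (placed perpendicular here, an assumption);
    3) per-pixel color differences (Euclidean distance in the image's color
       space, an assumption) between the intersection pixel on the first
       segment and each pixel on the second segment;
    4) a 2-D map of those differences aligned with the captured image.
    """
    h, w, _ = image.shape
    diff_map = np.zeros((h, w))
    intersections = sample_segment(p0, p1, n_secondary)
    direction = np.asarray(p1, float) - np.asarray(p0, float)
    normal = np.array([-direction[1], direction[0]])
    normal /= np.linalg.norm(normal)
    for inter in intersections:          # one second segment per intersection
        a = inter - half_len * normal
        b = inter + half_len * normal
        ref = image[inter[0], inter[1]].astype(float)   # pixel at the intersection
        for r, c in sample_segment(a, b, 2 * half_len + 1):
            if 0 <= r < h and 0 <= c < w:
                diff_map[r, c] = np.linalg.norm(image[r, c].astype(float) - ref)
    return diff_map
```

Amendments that recite concrete detail at roughly this level (the distance metric, how the map is assembled) are the kind of "elaboration" the examiner suggests above.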
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

1. Claims 1-3, 5, 9, 20 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Yoshiura (US 7747072 B2) in view of Yamamoto (US 10942134 B2).
Regarding claim 1, Yoshiura discloses: An information processing apparatus for inspecting a distribution of colors of a subject to be inspected, comprising: one or more memories storing instructions (Yoshiura, figure 2, flash memory; column 7, line 35 through column 8, line 25, the CPU and flash memory interface to execute the image processing); and one or more processors executing the instructions to function as (Yoshiura, column 7, line 35 through column 8, line 25, the CPU and flash memory interface to execute the image processing): [a first setting unit configured to set, in response to a user instruction, a first line segment in a captured image of the subject;] a second setting unit (Yoshiura, column 7, line 44 through column 8, line 10, the system has a CPU and a memory for executing instructions; [0043] of the present application defines a unit as hardware or software with the functions ascribed to the unit, which is therefore functionally equivalent to the units described) configured to set plural second line segments different from the first line segment and each having a point of intersection with the first line segment in the captured image (Yoshiura, column 12, lines 30-37, a second line is set; two lines are set in both directions, therefore the first and second lines are different. According to [0066]-[0072] of the applicant’s specification, the line-setting “correspondence relationship” merely states that the lines are separate and correspond to an input of two lines or the points used to generate them; Yoshiura, column 12, lines 30-37, is analogous because two lines in different directions are set, they are separate from one another, and they correspond to two separate directions and are indicated by points set by a CPU (setting unit). Further, figure 5 shows two intersecting lines E1 and E2, and column 9, line 60 through column 10, line 25 discloses using the lines E1 and E2 to determine the color difference for the focal pixel, which is at the intersection of the lines (pixel C). Further, column 12, lines 4-30 details that in certain cases, such as a pattern being detected, multiple lines of a predetermined number cross the edge code (line), indicating multiple second lines or line segments being used to determine the color-difference map.) [Images: Yoshiura, column 12; column 12, lines 30-37; figure 5; columns 9 and 10, emphasis added] a color difference acquisition unit configured to acquire color differences in a color space between pixels on the first line segment on the point of intersection and pixels on the second line segments on the point of intersection (Yoshiura, figure 5 shows two intersecting lines E1 and E2; column 9, line 60 through column 10, line 25 discloses using the lines E1 and E2 to determine the color difference for the focal pixel, which is at the intersection of the lines (pixel C), where color differences along each line are determined directionally) and a generation unit configured to generate a color-difference map for displaying a distribution of the color differences of the pixels on the plural second line segments as a two-dimensional image corresponding to the captured image (Yoshiura, column 2, lines 13-21, the color difference in an image is determined and plotted based on the color difference between the points; column 8, lines 33-36, the color differences are calculated in both X and Y directions, which correspond to the directions in which the lines are set).
[Images: Yoshiura, column 2 and column 8, emphasis added] Yoshiura does not teach: a first setting unit configured to set, in response to a user instruction, a first line segment in a captured image of the subject. However, Yamamoto teaches: a first setting unit configured to set, in response to a user instruction, a first line segment in a captured image of the subject (Yamamoto, column 15, lines 44-67, and column 16, lines 1-5, the reference designation point setting unit (first setting unit) determines multiple points designated by the user, then in step S102 the reference line passing through the coordinates is determined, and the line’s value is determined; figure 4, a first and second reference point are set by the user, a line is generated from them (first line segment), and this line is measured and calculated to have a measurement result; column 32, lines 15-30, the points used to generate the lines are set a predetermined number of pixels away from the region boundary). The combination of Yoshiura and Yamamoto would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. Yoshiura teaches a method of generating a color-difference map using generated lines but does not teach user input for determining the line locations. Yamamoto remedies this deficiency by teaching the user input, which would improve the system of Yoshiura by allowing the user to control the color-map generation via the line placement.
(Applicant’s specification [0042], [0048], [0053] and [0060]-[0071])

Regarding claim 2, the combination of Yoshiura and Yamamoto teaches: The information processing apparatus according to claim 1, wherein each second line segment is a line segment perpendicular to the first line segment (Yoshiura, column 12, lines 22-30, a first edge/line is set; when the opposite edge is determined, another line is set perpendicular to the first). [Image: Yoshiura, column 12, emphasis added]

Regarding claim 3, the combination of Yoshiura and Yamamoto teaches: The information processing apparatus according to claim 1, wherein each second line segment is a line segment corresponding to a vertical direction or a horizontal direction of the image (Yoshiura, column 11, lines 38-45, two lines (Cx and Cy) are computed which represent the color differences in the X and Y directions, respectively; given that Cx and Cy are shown as functions of X and Y to be plotted on a graph, they would be two separate lines corresponding to an X and a Y direction, respectively). [Image: Yoshiura, column 11, emphasis added]

Regarding claim 5, the combination of Yoshiura and Yamamoto teaches: The information processing apparatus according to claim 1, wherein the second setting unit (Yoshiura, column 7, line 44 through column 8, line 10, the system has a CPU and a memory for executing instructions; [0043] of the present application defines a unit as hardware or software with the functions ascribed to the unit, which is therefore functionally equivalent to the units described) sets each second line segment based on a starting point and an ending point which have been set (Yoshiura, column 12, lines 25-30, a line having a predetermined length is set along a direction crossing the line EC(x, y); given that the line has a predetermined length, it must also have a predetermined starting and ending point).
[Image: Yoshiura, column 12, emphasis added]

Regarding claim 9, the combination of Yoshiura and Yamamoto teaches: The information processing apparatus according to claim 1, wherein the first line segment is a line segment configured with a starting point, an ending point (Yoshiura, column 5, lines 5-17, the color difference is calculated between points, such as two points on a line), and one or more nodes present between the starting point and the ending point (Yoshiura, column 5, lines 5-17, the color difference is calculated between points, such as two points on a line; given that the lines in the coordinate system have a plurality of points on them, and any two points can be selected, there must be at least one node present. In paragraph [0197] of the applicant’s specification, the applicant defines a node as a set point on the reference line; therefore, any point on the line could be equivalent to a node if it were a point of interest for analysis).
[Image: Yoshiura, column 5, emphasis added]

Regarding claim 20, the combination of Yoshiura and Yamamoto teaches: An information processing method for inspecting a distribution of colors of a subject to be inspected, implemented by one or more memories storing instructions (Yoshiura, figure 2, flash memory; column 7, line 35 through column 8, line 25, the CPU and flash memory interface to execute the image processing); and one or more processors executing the instructions, the method comprising (Yoshiura, column 7, line 35 through column 8, line 25, the CPU and flash memory interface to execute the image processing): setting, in response to a user instruction, a first line segment in a captured image of the subject (Yamamoto, column 15, lines 44-67, and column 16, lines 1-5, the reference designation point setting unit (first setting unit) determines multiple points designated by the user, then in step S102 the reference line passing through the coordinates is determined, and the line’s value is determined; figure 4, a first and second reference point are set by the user, a line is generated from them (first line segment), and this line is measured and calculated to have a measurement result; column 32, lines 15-30, the points used to generate the lines are set a predetermined number of pixels away from the region boundary) setting plural second line segments different from the first line segment and each having a point of intersection with the first line segment in the captured image (Yoshiura, column 12, lines 30-37, a second line is set; two lines are set in both directions, therefore the first and second lines are different. According to [0066]-[0072] of the applicant’s specification, the line-setting “correspondence relationship” merely states that the lines are separate and correspond to an input of two lines or the points used to generate them; Yoshiura, column 12, lines 30-37, is analogous because two lines in different directions are set, they are separate from one another, and they correspond to two separate directions and are indicated by points set by a CPU (setting unit). Further, figure 5 shows two intersecting lines E1 and E2, and column 9, line 60 through column 10, line 25 discloses using the lines E1 and E2 to determine the color difference for the focal pixel, which is at the intersection of the lines (pixel C). Further, column 12, lines 4-30 details that in certain cases, such as a pattern being detected, multiple lines of a predetermined number cross the edge code (line), indicating multiple second lines or line segments being used to determine the color-difference map.) acquiring color differences in a color space between pixels on the first line segment on the point of intersection and pixels on the second line segments on the point of intersection (Yoshiura, figure 5 shows two intersecting lines E1 and E2; column 9, line 60 through column 10, line 25 discloses using the lines E1 and E2 to determine the color difference for the focal pixel, which is at the intersection of the lines (pixel C), where color differences along each line are determined directionally) and generating a color-difference map for displaying a distribution of the color differences of the pixels on the plural second line segments as a two-dimensional image corresponding to the captured image (Yoshiura, column 2, lines 13-21, the color difference in an image is determined and plotted based on the color difference between the points; column 8, lines 33-36, the color differences are calculated in both X and Y directions, which correspond to the directions in which the lines are set). The combination of Yoshiura and Yamamoto would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention.
Yoshiura teaches a method of generating a color-difference map using generated lines but does not teach user input for determining the line locations. Yamamoto remedies this deficiency by teaching the user input, which would improve the system of Yoshiura by allowing the user to control the color-map generation via the line placement. (Applicant’s specification [0042], [0048], [0053] and [0060]-[0071])

Regarding claim 22, the combination of Yoshiura and Yamamoto teaches: A non-transitory computer-readable storage medium storing instructions that, when executed by a computer comprising one or more memories storing instructions and one or more processors executing the instructions, cause the computer to perform an information processing method for inspecting a distribution of colors of a subject to be inspected, the method comprising (Yoshiura, figure 2, flash memory; column 7, line 35 through column 8, line 25, the CPU and flash memory interface to execute the image processing; column 7, line 44 through column 8, line 10, the system has a CPU and a memory for executing instructions): setting, in response to a user instruction, a first line segment in a captured image of the subject (Yamamoto, column 15, lines 44-67, and column 16, lines 1-5, the reference designation point setting unit (first setting unit) determines multiple points designated by the user, then in step S102 the reference line passing through the coordinates is determined, and the line’s value is determined; figure 4, a first and second reference point are set by the user, a line is generated from them (first line segment), and this line is measured and calculated to have a measurement result; column 32, lines 15-30, the points used to generate the lines are set a predetermined number of pixels away from the region boundary) setting plural second line segments different from the first line segment and each having a point of intersection with the first line segment in the captured image (Yoshiura, column 12, lines 30-37, a second line is set; two lines are set in both directions, therefore the first and second lines are different. According to [0066]-[0072] of the applicant’s specification, the line-setting “correspondence relationship” merely states that the lines are separate and correspond to an input of two lines or the points used to generate them; Yoshiura, column 12, lines 30-37, is analogous because two lines in different directions are set, they are separate from one another, and they correspond to two separate directions and are indicated by points set by a CPU (setting unit). Further, figure 5 shows two intersecting lines E1 and E2, and column 9, line 60 through column 10, line 25 discloses using the lines E1 and E2 to determine the color difference for the focal pixel, which is at the intersection of the lines (pixel C). Further, column 12, lines 4-30 details that in certain cases, such as a pattern being detected, multiple lines of a predetermined number cross the edge code (line), indicating multiple second lines or line segments being used to determine the color-difference map.) acquiring color differences in a color space between pixels on the first line segment on the point of intersection and pixels on the second line segments on the point of intersection (Yoshiura, figure 5 shows two intersecting lines E1 and E2; column 9, line 60 through column 10, line 25 discloses using the lines E1 and E2 to determine the color difference for the focal pixel, which is at the intersection of the lines (pixel C), where color differences along each line are determined directionally) and generating a color-difference map for displaying a distribution of the color differences of the pixels on the plural second line segments as a two-dimensional image corresponding to the captured image (Yoshiura, column 2, lines 13-21, the color difference in an image is determined and plotted based on the color difference between the points; column 8, lines 33-36, the color differences are calculated in both X and Y directions, which correspond to the directions in which the lines are set). The combination of Yoshiura and Yamamoto would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. Yoshiura teaches a method of generating a color-difference map using generated lines but does not teach user input for determining the line locations. Yamamoto remedies this deficiency by teaching the user input, which would improve the system of Yoshiura by allowing the user to control the color-map generation via the line placement. (Applicant’s specification [0042], [0048], [0053] and [0060]-[0071])

2. Claims 6-8 are rejected under 35 U.S.C. 103 as being unpatentable over Yoshiura (US 7747072 B2) in view of Yamamoto (US 10942134 B2) and further in view of Yoshikatsu (JP 2010134654).

Regarding claim 6, the combination of Yoshiura and Yamamoto does not teach: The information processing apparatus according to claim 1, wherein the second setting unit sets each second line segment based on a starting point of the first line segment and a point different from any point on the first line segment. However, in the same field of endeavor, Yoshikatsu teaches: The information processing apparatus according to claim 1, wherein the second setting unit (Yoshikatsu, claim 3, derivation unit) sets each second line segment based on a starting point of the first line segment and a point different from any point on the first line segment (Yoshikatsu, claim 3, the derivation unit obtains a set line segment passing through the start point of the line segment). [Image: Yoshikatsu, claim 3] The combination of Yoshiura, Yamamoto and Yoshikatsu would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention.
The motivation for the combination lies in that the region setting method of Yoshikatsu would improve the color-difference mapping functions of the method of Yoshiura and Yamamoto, because the region setting method allows a smaller region to be isolated for analysis, which can improve processing speed and precision of analysis (Yoshikatsu, [0002]-[0004]).

Regarding claim 7, the combination of Yoshiura, Yamamoto and Yoshikatsu teaches: The information processing apparatus according to claim 1, further comprising a rectangle setting unit (Yoshikatsu, [0004], the drawing calculation circuit) configured to set a rectangle with respect to the captured image (Yoshikatsu, [0004], the drawing calculation circuit sets a rectangle), wherein the second setting unit sets each second line segment based on the rectangle (Yoshikatsu, claim 3, the derivation unit obtains multiple line segments for each drawing region or rectangular region).

The combination of Yoshiura and Yoshikatsu would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. The motivation for the combination lies in that the region setting method of Yoshikatsu would improve the color-difference mapping functions of the method of Yoshiura, because the ability to set a specific region of interest and to set the additional lines based on this region allows for more precise analysis, since a smaller region can be focused on (Yoshikatsu, [0002]-[0004]).
Regarding claim 8, the combination of Yoshiura, Yamamoto and Yoshikatsu teaches: The information processing apparatus according to claim 7, wherein the second setting unit (Yoshikatsu, [0032], orthogonal calculation unit) sets each second line segment corresponding to a vertical direction or a horizontal direction of the rectangle (Yoshikatsu, [0032], an orthogonal line segment is set to have a slope of “0”, corresponding to a horizontal line/horizontal direction, based on either the set line or the rectangle).

[Image: Yoshikatsu, [0032]]

The combination of Yoshiura, Yamamoto and Yoshikatsu would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. The motivation for the combination lies in that the region setting method of Yoshikatsu would improve the color-difference mapping functions of the method of Yoshiura, because setting the line segment as described in Yoshikatsu allows the positional relationship between defined regions to be set based on a specific area of the image (the rectangular area). This allows the analysis of that region to be based on local image features rather than global features. (Yoshikatsu, [0032]-[0033] and [0002]-[0004])

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Smith, US 9536322 B1, details a method of using image mapping to detect changes in color and other attributes in an image or images. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JORDAN M ELLIOTT, whose telephone number is (703) 756-5463. The examiner can normally be reached M-F 8AM-5PM ET. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.M.E./ Examiner, Art Unit 2666
/EMILY C TERRELL/ Supervisory Patent Examiner, Art Unit 2666
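The claim 22 method mapped above (a user-set first line segment, plural second segments each crossing it, per-pixel color differences relative to each intersection, rendered as a two-dimensional map) can be sketched in code for readers following the claim language. This is an illustrative reconstruction only, not the applicant's or the cited references' actual implementation: the Lab color space, the Euclidean (ΔE76-style) distance, and the function names `sample_segment` and `color_difference_map` are all assumptions.

```python
import numpy as np

def sample_segment(p0, p1):
    # Dense integer pixel coordinates along a segment; endpoints are (x, y).
    n = int(max(abs(p1[0] - p0[0]), abs(p1[1] - p0[1]))) + 1
    xs = np.linspace(p0[0], p1[0], n).round().astype(int)
    ys = np.linspace(p0[1], p1[1], n).round().astype(int)
    return np.stack([ys, xs], axis=1)  # (row, col) pairs

def color_difference_map(lab_image, first_seg, second_segs):
    """For each second segment, compare its pixels against the pixel where it
    crosses the first segment, and scatter the differences into a 2D map."""
    h, w, _ = lab_image.shape
    dmap = np.full((h, w), np.nan)  # NaN = no segment passes through this pixel
    first_pts = {tuple(p) for p in sample_segment(*first_seg)}
    for seg in second_segs:
        pts = sample_segment(*seg)
        # Intersection pixel: the first sample that also lies on the first segment.
        cross = next((p for p in pts if tuple(p) in first_pts), None)
        if cross is None:
            continue  # this second segment never meets the first one
        ref = lab_image[cross[0], cross[1]].astype(float)
        for y, x in pts:
            # Euclidean distance in Lab, i.e. a ΔE76-style color difference.
            dmap[y, x] = np.linalg.norm(lab_image[y, x].astype(float) - ref)
    return dmap
```

For example, with a vertical first segment at x = 2 and one horizontal second segment at y = 3, the map holds the color difference of each pixel on the horizontal line relative to the pixel where the two lines cross, and NaN elsewhere.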

Prosecution Timeline

Jan 03, 2023
Application Filed
May 21, 2025
Non-Final Rejection — §101, §103
Aug 26, 2025
Response Filed
Oct 30, 2025
Final Rejection — §101, §103
Feb 06, 2026
Request for Continued Examination
Feb 17, 2026
Response after Non-Final Action
Mar 02, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573117
METHOD AND DEVICE FOR DEEP LEARNING-BASED PATCHWISE RECONSTRUCTION FROM CLINICAL CT SCAN DATA
2y 5m to grant Granted Mar 10, 2026
Patent 12475998
SYSTEMS AND METHODS OF ADAPTIVELY GENERATING FACIAL DEVICE SELECTIONS BASED ON VISUALLY DETERMINED ANATOMICAL DIMENSION DATA
2y 5m to grant Granted Nov 18, 2025
Patent 12450918
AUTOMATIC LANE MARKING EXTRACTION AND CLASSIFICATION FROM LIDAR SCANS
2y 5m to grant Granted Oct 21, 2025
Patent 12437415
METHODS AND SYSTEMS FOR NON-DESTRUCTIVE EVALUATION OF STATOR INSULATION CONDITION
2y 5m to grant Granted Oct 07, 2025
Patent 12406358
METHODS AND SYSTEMS FOR AUTOMATED SATURATION BAND PLACEMENT
2y 5m to grant Granted Sep 02, 2025
Based on this examiner's 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
45%
Grant Probability
31%
With Interview (-13.7%)
2y 10m
Median Time to Grant
High
PTA Risk
Based on 20 resolved cases by this examiner. Grant probability derived from career allow rate.
