Prosecution Insights
Last updated: April 19, 2026
Application No. 18/672,606

METHOD OF IDENTIFYING AND COLORIZING PARTIALLY COLORIZED IMAGE AND ELECTRONIC DEVICE

Status: Non-Final OA (§103)
Filed: May 23, 2024
Examiner: MINKO, DENIS VASILIY
Art Unit: 2612
Tech Center: 2600 (Communications)
Assignee: Samsung Electronics Co., Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 62% (Moderate)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 2y 5m
Grant Probability With Interview: 79%

Examiner Intelligence

Career Allow Rate: 62% (10 granted / 16 resolved; +0.5% vs TC avg)
Interview Lift: +16.7% on resolved cases with interview (strong)
Avg Prosecution: 2y 5m
Currently Pending: 25 applications
Total Applications: 41 (across all art units)

Statute-Specific Performance

§101: 9.4% (-30.6% vs TC avg)
§102: 18.7% (-21.3% vs TC avg)
§103: 61.4% (+21.4% vs TC avg)
§112: 9.9% (-30.1% vs TC avg)

Based on career data from 16 resolved cases; Tech Center averages are estimates.
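The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch of how they relate (reading the interview lift as the gap between the two displayed grant probabilities is our illustrative assumption, not the report's stated methodology):

```python
# Reproducing the report's headline figures from its own counts.
granted, resolved = 10, 16                       # from "10 granted / 16 resolved"
career_allow_rate = granted / resolved           # 10/16 = 0.625, shown rounded as 62%

base_probability = 0.62                          # predicted grant probability
with_interview = 0.79                            # predicted grant probability with interview
lift_points = with_interview - base_probability  # ~0.17, i.e. roughly +17 points

print(f"Career allow rate: {career_allow_rate:.1%}")   # Career allow rate: 62.5%
print(f"Interview lift: +{lift_points:.0%}")           # Interview lift: +17%
```

The displayed +16.7% career interview lift comes from the examiner's history and need not equal the 62% to 79% prediction gap; the snippet only shows that the two quantities are computed the same way.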

Office Action (§103)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over Yoo et al. (US 10997752) in view of Hiskens et al. (US 20230064450).

Regarding claim 1, Yoo teaches:

An electronic device comprising: a display (Yoo [0098]: As illustrated in FIG. 7A, in the color mode, the edge-guided colorization system 106 presents a colorization graphical user interface 706 via a display screen 704 on the user client device 702 (i.e., the user client device 108).);

at least one processor, comprising processing circuitry (Yoo [0116]: The components of the edge-guided colorization system 106 can include software, hardware, or both. For example, the components of the edge-guided colorization system 106 can include one or more instructions stored on a computer-readable storage medium and executable by processors of one or more computing devices (e.g., the user client device 108).); and

a memory storing instructions that, when executed by at least one processor, individually and/or collectively, cause the electronic device to (Yoo [0141]: Embodiments of the present disclosure may comprise or utilize a special purpose or general-purpose computer including computer hardware, such as, for example, one or more processors and system memory, as discussed in greater detail below.):

identify whether an original image includes a colored image area and a black-and-white image area based on colorfulness parameters (Yoo [0012]: In particular, the edge-guided colorization system can utilize selected color edges together with a greyscale image and selected color points as channel inputs to the edge-guided colorization neural network to generate a colorized digital image. By predicting and presenting interactive color edges utilizing a color edge prediction neural network and then utilizing the interactive color edges in conjunction with an edge-guided colorization neural network, the edge-guided colorization system can flexibly and efficiently generate accurate colorized digital images.).

Yoo fails to teach:

generate a color image wherein entire areas of the original image are colored (Hiskens [0020]: An image-generating module 148 may produce a color version of the infrared image 126, which is an initial output color image 150 to be partially colorized by the colorizing module 152. Because infrared cameras output grayscale images whose pixels only have intensity values, a monochromatic color version is produced which, although initially may only have shades of gray, may have per-pixel color information. It may be convenient for the initial output color image 150 to be an HSV (hue-saturation-value) image. The values of each pixel are set according to the intensities of the corresponding pixels in the infrared image 126. In another embodiment, the initial output color image 150 is an RGB image and each pixel's three color values are set to the intensity of the corresponding pixel in the infrared image 126, thus producing a grayscale image mirroring the infrared image 126 yet amenable to changing the colors of its pixels.);

determine weight values corresponding to the original image and weight values corresponding to the color image (Hiskens [0019]: The feature-matching module 146 compares features in the first feature set with the features in the second feature set to find matches. The comparing may involve comparing a number of attributes of each feature. Such attributes may include tagged object types (as per previous object recognition), probability of accuracy, location, etc. Comparing may additionally or alternatively be based on shape similarity. Comparing may be as simple as comparing locations. In one embodiment each potential match may be scored as a weighted combination of matching attributes.); and

synthesize the original image and the color image, based on the weight values corresponding to the original image and the weight values corresponding to the color image (Hiskens [0021]: The colorizing module 152 receives the initial output color image 150 and the matched features from the feature-matching module 146. Because the initial output color image 150 and the infrared image 126 correspond pixel-by-pixel, the locations and pixels of the matched features from the infrared image are the same in the initial output color image 150. In one embodiment, the colorizing module 152 colors each matched feature (from the infrared image) in the initial output color image 150 based on the corresponding colors of the matching features from the RGB image 128.).
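The claim 1 limitations mapped above describe a concrete pipeline: score colorfulness, fully colorize the image, then blend the original with the colorized result using per-image weights. A minimal sketch of that flow in pure Python (the colorfulness proxy, the area-split, and the fixed blend weights are illustrative assumptions, not taken from either reference):

```python
# Hypothetical sketch of the claim 1 pipeline. Pixels are 0-255 RGB
# triples; the "colorfulness parameter" here is a crude per-pixel
# channel-spread average, and the blend weights are arbitrary.

def colorfulness(pixels):
    """Colorfulness proxy: mean spread between max and min channel."""
    return sum(max(p) - min(p) for p in pixels) / len(pixels)

def has_colored_and_bw_areas(area_a, area_b, threshold=10.0):
    """Identification step: one area colorful, the other near-grey."""
    low, high = sorted((colorfulness(area_a), colorfulness(area_b)))
    return low < threshold <= high

def synthesize(original, colorized, w_orig=0.4, w_color=0.6):
    """Weighted per-pixel synthesis of the original and color images."""
    return [tuple(round(w_orig * o + w_color * c) for o, c in zip(po, pc))
            for po, pc in zip(original, colorized)]

bw_area = [(120, 120, 120), (80, 80, 80)]        # near-grey pixels
colored_area = [(200, 40, 40), (30, 180, 60)]    # saturated pixels
print(has_colored_and_bw_areas(bw_area, colored_area))  # True
print(synthesize(bw_area, colored_area)[0])             # (168, 72, 72)
```

The point of the sketch is only the shape of the claim: identification happens on colorfulness statistics, and the output is a weighted mix, not a replacement of the original pixels.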
Hiskens teaches:

generate a color image wherein entire areas of the original image are colored (Hiskens [0020]: An image-generating module 148 may produce a color version of the infrared image 126, which is an initial output color image 150 to be partially colorized by the colorizing module 152. Because infrared cameras output grayscale images whose pixels only have intensity values, a monochromatic color version is produced which, although initially may only have shades of gray, may have per-pixel color information. It may be convenient for the initial output color image 150 to be an HSV (hue-saturation-value) image. The values of each pixel are set according to the intensities of the corresponding pixels in the infrared image 126. In another embodiment, the initial output color image 150 is an RGB image and each pixel's three color values are set to the intensity of the corresponding pixel in the infrared image 126, thus producing a grayscale image mirroring the infrared image 126 yet amenable to changing the colors of its pixels.);

determine weight values corresponding to the original image and weight values corresponding to the color image (Hiskens [0019]: The feature-matching module 146 compares features in the first feature set with the features in the second feature set to find matches. The comparing may involve comparing a number of attributes of each feature. Such attributes may include tagged object types (as per previous object recognition), probability of accuracy, location, etc. Comparing may additionally or alternatively be based on shape similarity. Comparing may be as simple as comparing locations. In one embodiment each potential match may be scored as a weighted combination of matching attributes.); and

synthesize the original image and the color image, based on the weight values corresponding to the original image and the weight values corresponding to the color image (Hiskens [0021]: The colorizing module 152 receives the initial output color image 150 and the matched features from the feature-matching module 146. Because the initial output color image 150 and the infrared image 126 correspond pixel-by-pixel, the locations and pixels of the matched features from the infrared image are the same in the initial output color image 150. In one embodiment, the colorizing module 152 colors each matched feature (from the infrared image) in the initial output color image 150 based on the corresponding colors of the matching features from the RGB image 128.).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Yoo with Hiskens. Having a way to combine images that are colored and black and white, as in Hiskens, would benefit the Yoo teachings by producing more accurate colors. Additionally, this is the application of a known technique, combining colored and black-and-white images, to yield predictable results.

Regarding claim 2: Yoo and Hiskens teach the electronic device of claim 1, wherein the colorfulness parameters comprise:

a first index quantifying colorfulness for the entire areas of the original image (Yoo [0001]: To illustrate, conventional image colorizing systems can automatically add color to complex black and white or grayscale images based on user selection of particular fill colors. For example, client devices can select colors corresponding to particular areas of a grayscale image, and image colorizing systems can generate a digital image reflecting the selected colors. [0014]: In particular, the edge-guided colorization system can utilize a Canny edge detection algorithm to generate ground truth edges from a chrominance image.); and

a second index quantifying colorfulness for a specific area of the original image, and wherein the instructions, when executed by at least one processor, individually and/or collectively, cause the electronic device to determine that the original image includes the colored image area based on the first index having a value greater than or equal to a first value and less than a second value relatively greater than the first value, and the second index being greater than a third value and having a difference from the first index of more than a fourth value (Yoo [0001]: For example, client devices can select colors corresponding to particular areas of a grayscale image, and image colorizing systems can generate a digital image reflecting the selected colors. [0082]: The edge-guided colorization system 106 compares the training predicted color Ŷ 608 with the ground truth color Y 602 to adjust parameters of the edge-guided colorization neural network 606. The edge-guided colorization system 106 trains the edge-guided colorization neural network 606 parameters to map inputs training grayscale image X 604, training color points U 614, and canny edges V 612 as close to the ground truth color Y 602 as possible. In at least one embodiment, the edge-guided colorization system 106 utilizes a per-pixel ℓ₀ regression loss by comparing each pixel of the training predicted color Ŷ 608 with the corresponding pixel in the ground truth color Y 602. As mentioned previously, the edge-guided colorization system 106 can also utilize alternative loss functions.).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Yoo with Hiskens. Having a way to combine images that are colored and black and white, as in Hiskens, would benefit the Yoo teachings by producing more accurate colors. Additionally, this is the application of a known technique, combining colored and black-and-white images, to yield predictable results.

Regarding claim 4: Yoo and Hiskens teach the electronic device of claim 2, wherein the instructions, when executed by at least one processor, individually and/or collectively, cause the electronic device to:

decrease a probability that the original image is classified as the partially colored image, by increasing the first value (Yoo [0004]: Thus, conventional image colorizing systems are often required to re-analyze pixels or regions and adjust the desired colorization.); and

increase the probability that the original image is classified as the partially colored image, by increasing the second value (Yoo [0004]: Thus, conventional image colorizing systems are often required to re-analyze pixels or regions and adjust the desired colorization.).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Yoo with Hiskens. Having a way to combine images that are colored and black and white, as in Hiskens, would benefit the Yoo teachings by producing more accurate colors. Additionally, this is the application of a known technique, combining colored and black-and-white images, to yield predictable results.

Regarding claim 5:
Yoo and Hiskens teach the electronic device of claim 2, wherein the instructions, when executed by at least one processor, individually and/or collectively, cause the electronic device to increase influence of the second index during a process of classifying the original image as the partially colored image, by increasing the third value or the fourth value, and wherein the greater influence of the second index indicates that the richness of the colors in the specific area is larger (Yoo [0004]: Thus, conventional image colorizing systems are often required to re-analyze pixels or regions and adjust the desired colorization.).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Yoo with Hiskens. Having a way to combine images that are colored and black and white, as in Hiskens, would benefit the Yoo teachings by producing more accurate colors. Additionally, this is the application of a known technique, combining colored and black-and-white images, to yield predictable results.

Regarding claim 11, Yoo teaches:

An electronic device comprising: at least one display (Yoo [0098]: As illustrated in FIG. 7A, in the color mode, the edge-guided colorization system 106 presents a colorization graphical user interface 706 via a display screen 704 on the user client device 702 (i.e., the user client device 108).);

at least one processor, comprising processing circuitry (Yoo [0116]: The components of the edge-guided colorization system 106 can include software, hardware, or both. For example, the components of the edge-guided colorization system 106 can include one or more instructions stored on a computer-readable storage medium and executable by processors of one or more computing devices (e.g., the user client device 108).); and

a memory configured to store instructions (Yoo [0141]: Embodiments of the present disclosure may comprise or utilize a special purpose or general-purpose computer including computer hardware, such as, for example, one or more processors and system memory, as discussed in greater detail below.),

wherein colorfulness comprises: a first index made by quantifying colorfulness for an entire area of an original image (Yoo [0001]: To illustrate, conventional image colorizing systems can automatically add color to complex black and white or grayscale images based on user selection of particular fill colors. For example, client devices can select colors corresponding to particular areas of a grayscale image, and image colorizing systems can generate a digital image reflecting the selected colors. [0014]: In particular, the edge-guided colorization system can utilize a Canny edge detection algorithm to generate ground truth edges from a chrominance image.); and

a second index made by quantifying colorfulness for a specific area of the original image, wherein the instructions, when executed by at least one processor, individually and/or collectively, cause the electronic device to: identify whether an input original image is a partially colorized image with an emphasized specific color in a black-and-white background or a monochrome background on the basis of the colorfulness (Hiskens [0016]: The partially blended color images 132 may include mostly grayscale image data (in particular, background scenery based on data from the infrared camera 120), while features of interest may be shown colorized (based on data from the RGB camera 122) to provide the user with more information about conditions in front of the cameras.); and

determine that the original image is the partially colorized image based on the first index having a value equal to or greater than a first value and less than a second value greater than the first value, the second index being greater than a third value, and a difference between the second index and the first index being greater than a fourth value (Yoo [0001]: For example, client devices can select colors corresponding to particular areas of a grayscale image, and image colorizing systems can generate a digital image reflecting the selected colors. [0082]: The edge-guided colorization system 106 compares the training predicted color Ŷ 608 with the ground truth color Y 602 to adjust parameters of the edge-guided colorization neural network 606. The edge-guided colorization system 106 trains the edge-guided colorization neural network 606 parameters to map inputs training grayscale image X 604, training color points U 614, and canny edges V 612 as close to the ground truth color Y 602 as possible. In at least one embodiment, the edge-guided colorization system 106 utilizes a per-pixel ℓ₀ regression loss by comparing each pixel of the training predicted color Ŷ 608 with the corresponding pixel in the ground truth color Y 602. As mentioned previously, the edge-guided colorization system 106 can also utilize alternative loss functions.).

Yoo fails to teach: a second index made by quantifying colorfulness for a specific area of the original image, wherein the instructions, when executed by at least one processor, individually and/or collectively, cause the electronic device to: identify whether an input original image is a partially colorized image with an emphasized specific color in a black-and-white background or a monochrome background on the basis of the colorfulness (Hiskens [0016]: The partially blended color images 132 may include mostly grayscale image data (in particular, background scenery based on data from the infrared camera 120), while features of interest may be shown colorized (based on data from the RGB camera 122) to provide the user with more information about conditions in front of the cameras.).

Hiskens teaches: a second index made by quantifying colorfulness for a specific area of the original image, wherein the instructions, when executed by at least one processor, individually and/or collectively, cause the electronic device to: identify whether an input original image is a partially colorized image with an emphasized specific color in a black-and-white background or a monochrome background on the basis of the colorfulness (Hiskens [0016]: The partially blended color images 132 may include mostly grayscale image data (in particular, background scenery based on data from the infrared camera 120), while features of interest may be shown colorized (based on data from the RGB camera 122) to provide the user with more information about conditions in front of the cameras.).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Yoo with Hiskens. Having a way to combine images that are colored and black and white, as in Hiskens, would benefit the Yoo teachings by producing more accurate colors. Additionally, this is the application of a known technique, combining colored and black-and-white images, to yield predictable results.

Regarding claim 19, Yoo and Hiskens together teach:

A method of operating an electronic device comprising: identifying whether an input original image is a partially colorized image with an emphasized specific color in a black-and-white background or a monochrome background based on colorfulness (Hiskens [0016]: The partially blended color images 132 may include mostly grayscale image data (in particular, background scenery based on data from the infrared camera 120), while features of interest may be shown colorized (based on data from the RGB camera 122) to provide the user with more information about conditions in front of the cameras.) (Yoo [0012]: In particular, the edge-guided colorization system can utilize selected color edges together with a greyscale image and selected color points as channel inputs to the edge-guided colorization neural network to generate a colorized digital image. By predicting and presenting interactive color edges utilizing a color edge prediction neural network and then utilizing the interactive color edges in conjunction with an edge-guided colorization neural network, the edge-guided colorization system can flexibly and efficiently generate accurate colorized digital images.);

creating a tinted color image in an entire area of the original image (Hiskens [0020]: An image-generating module 148 may produce a color version of the infrared image 126, which is an initial output color image 150 to be partially colorized by the colorizing module 152. Because infrared cameras output grayscale images whose pixels only have intensity values, a monochromatic color version is produced which, although initially may only have shades of gray, may have per-pixel color information. It may be convenient for the initial output color image 150 to be an HSV (hue-saturation-value) image. The values of each pixel are set according to the intensities of the corresponding pixels in the infrared image 126. In another embodiment, the initial output color image 150 is an RGB image and each pixel's three color values are set to the intensity of the corresponding pixel in the infrared image 126, thus producing a grayscale image mirroring the infrared image 126 yet amenable to changing the colors of its pixels.);

determining a weight corresponding to the original image and a weight corresponding to the color image (Hiskens [0019]: The feature-matching module 146 compares features in the first feature set with the features in the second feature set to find matches. The comparing may involve comparing a number of attributes of each feature. Such attributes may include tagged object types (as per previous object recognition), probability of accuracy, location, etc. Comparing may additionally or alternatively be based on shape similarity. Comparing may be as simple as comparing locations. In one embodiment each potential match may be scored as a weighted combination of matching attributes.); and

synthesizing the input original image and the color image based on the determined weight (Hiskens [0021]: The colorizing module 152 receives the initial output color image 150 and the matched features from the feature-matching module 146. Because the initial output color image 150 and the infrared image 126 correspond pixel-by-pixel, the locations and pixels of the matched features from the infrared image are the same in the initial output color image 150. In one embodiment, the colorizing module 152 colors each matched feature (from the infrared image) in the initial output color image 150 based on the corresponding colors of the matching features from the RGB image 128.).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Yoo with Hiskens. Having a way to combine images that are colored and black and white, as in Hiskens, would benefit the Yoo teachings by producing more accurate colors. Additionally, this is the application of a known technique, combining colored and black-and-white images, to yield predictable results.

Regarding claim 20: Yoo and Hiskens teach the method of claim 19, wherein the colorfulness comprises:

a first index quantifying colorfulness for an entire area of an original image (Yoo [0001]: To illustrate, conventional image colorizing systems can automatically add color to complex black and white or grayscale images based on user selection of particular fill colors. For example, client devices can select colors corresponding to particular areas of a grayscale image, and image colorizing systems can generate a digital image reflecting the selected colors. [0014]: In particular, the edge-guided colorization system can utilize a Canny edge detection algorithm to generate ground truth edges from a chrominance image.); and

a second index quantifying colorfulness for a specific area of the original image (Yoo [0001]: For example, client devices can select colors corresponding to particular areas of a grayscale image, and image colorizing systems can generate a digital image reflecting the selected colors. [0082]: The edge-guided colorization system 106 compares the training predicted color Ŷ 608 with the ground truth color Y 602 to adjust parameters of the edge-guided colorization neural network 606. The edge-guided colorization system 106 trains the edge-guided colorization neural network 606 parameters to map inputs training grayscale image X 604, training color points U 614, and canny edges V 612 as close to the ground truth color Y 602 as possible. In at least one embodiment, the edge-guided colorization system 106 utilizes a per-pixel ℓ₀ regression loss by comparing each pixel of the training predicted color Ŷ 608 with the corresponding pixel in the ground truth color Y 602. As mentioned previously, the edge-guided colorization system 106 can also utilize alternative loss functions.),

and wherein the identifying of whether the input original image is the partially colorized image with the emphasized specific color in the black-and-white background or the monochrome background on the basis of the colorfulness further comprises (Hiskens [0016]: The partially blended color images 132 may include mostly grayscale image data (in particular, background scenery based on data from the infrared camera 120), while features of interest may be shown colorized (based on data from the RGB camera 122) to provide the user with more information about conditions in front of the cameras.):

determining that the original image is the partially colorized image based on a first index having a value equal to or greater than a first value and less than a second value greater than the first value, a second index being greater than a third value, and a difference between the second index and the first index being greater than a fourth value (Yoo [0001]: For example, client devices can select colors corresponding to particular areas of a grayscale image, and image colorizing systems can generate a digital image reflecting the selected colors. [0082]: The edge-guided colorization system 106 compares the training predicted color Ŷ 608 with the ground truth color Y 602 to adjust parameters of the edge-guided colorization neural network 606. The edge-guided colorization system 106 trains the edge-guided colorization neural network 606 parameters to map inputs training grayscale image X 604, training color points U 614, and canny edges V 612 as close to the ground truth color Y 602 as possible. In at least one embodiment, the edge-guided colorization system 106 utilizes a per-pixel ℓ₀ regression loss by comparing each pixel of the training predicted color Ŷ 608 with the corresponding pixel in the ground truth color Y 602. As mentioned previously, the edge-guided colorization system 106 can also utilize alternative loss functions.).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Yoo with Hiskens. Having a way to combine images that are colored and black and white, as in Hiskens, would benefit the Yoo teachings by producing more accurate colors. Additionally, this is the application of a known technique, combining colored and black-and-white images, to yield predictable results.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Yoo et al. (US 10997752) in view of Hiskens et al. (US 20230064450) and Cai et al. (CN 110263604).

Regarding claim 3:
Yoo and Hiskens teach the electronic device of claim 2, but fail to teach:

wherein the second index is characterized by the quantifying of the colors sensed for the specific area of the original image, and wherein the instructions, when executed by at least one processor, individually and/or collectively, cause the electronic device to:

uniformly divide the entire areas of the original image to determine the specific area and calculate the second index (Cai [Pg 5 Par 1]: its purpose is to divide the image into multiple regions such that pixels of the same semantic are grouped into a uniform area.);

classify each pixel within the original image into a class using semantic segmentation within the image (Cai [Pg 4 Par 14]: In this step, pixel-level classification, such as: 1) semantic image segmentation, finally obtaining the classification result for the position of each pixel; 2) edge detection, equivalent to a binary classification of each pixel (edge or not edge).);

determine pixels corresponding to objects of interest, based on classes according to the classifying (Cai [Pg 5 Par 11]: an image mask, for a selected image, graphic, or object, totally or partially shields the image processing, to control the area or process of the image processing.); and

determine an area in which the pixels corresponding to the objects of interest are located as the specific area and calculate the second index (Cai [Pg 5 Par 11]: an image mask, for a selected image, graphic, or object, totally or partially shields the image processing, to control the area or process of the image processing.).

Cai teaches:

wherein the second index is characterized by the quantifying of the colors sensed for the specific area of the original image, and wherein the instructions, when executed by at least one processor, individually and/or collectively, cause the electronic device to:

uniformly divide the entire areas of the original image to determine the specific area and calculate the second index (Cai [Pg 5 Par 1]: its purpose is to divide the image into multiple regions such that pixels of the same semantic are grouped into a uniform area.);

classify each pixel within the original image into a class using semantic segmentation within the image (Cai [Pg 4 Par 14]: In this step, pixel-level classification, such as: 1) semantic image segmentation, finally obtaining the classification result for the position of each pixel; 2) edge detection, equivalent to a binary classification of each pixel (edge or not edge).);

determine pixels corresponding to objects of interest, based on classes according to the classifying (Cai [Pg 5 Par 11]: an image mask, for a selected image, graphic, or object, totally or partially shields the image processing, to control the area or process of the image processing.); and

determine an area in which the pixels corresponding to the objects of interest are located as the specific area and calculate the second index (Cai [Pg 5 Par 11]: an image mask, for a selected image, graphic, or object, totally or partially shields the image processing, to control the area or process of the image processing.).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Yoo and Hiskens with Cai.
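Claim 3 recites two alternative ways of fixing the "specific area" before computing the second index: uniform division of the whole image, or a semantic-segmentation mask over objects of interest. A small illustrative sketch of both strategies (the class labels, block size, and colorfulness proxy are hypothetical stand-ins, not from the record):

```python
# Sketch of claim 3's two "specific area" strategies:
# (a) uniformly divide the image into blocks, or
# (b) keep the pixels a segmentation pass labeled as an object of interest,
# then compute the second (area-restricted) colorfulness index.

def uniform_blocks(width, height, block):
    """(a) Uniform division: yield (x0, y0, x1, y1) tiles covering the image."""
    for y in range(0, height, block):
        for x in range(0, width, block):
            yield (x, y, min(x + block, width), min(y + block, height))

def interest_mask(labels, interest_classes):
    """(b) Segmentation route: True where a pixel's class is of interest."""
    return [cls in interest_classes for cls in labels]

def second_index(pixels, mask):
    """Colorfulness proxy (mean channel spread) restricted to the masked area."""
    area = [p for p, keep in zip(pixels, mask) if keep]
    return sum(max(p) - min(p) for p in area) / len(area)

labels = ["sky", "person", "person", "sky"]              # hypothetical per-pixel classes
pixels = [(90, 90, 90), (220, 50, 50), (210, 60, 40), (88, 92, 90)]
mask = interest_mask(labels, {"person"})
print(list(uniform_blocks(4, 2, 2)))   # [(0, 0, 2, 2), (2, 0, 4, 2)]
print(second_index(pixels, mask))      # spread averaged over "person" pixels only
```

Either route produces the same downstream quantity, a colorfulness score over a sub-area, which is why the claim presents them as alternatives.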
Having a way to split an image up to adjust certain spots and to identify classes, as in Cai, would benefit the Yoo and Hiskens teachings by having a way to have more accurate colors. Additionally, this is the application of a known technique, having a way to split an image up to adjust certain spots and to identify classes, to yield predictable results. Regarding claim 12. Yoo and Hiskens teach: The electronic device of claim 11, wherein the second index quantifies the colorfulness of the specific area of the original image (Yoo [0001] For example, client devices can select colors corresponding to particular areas of a grayscale image, and image colorizing systems can generate a digital image reflecting the selected colors. [0082] The edge-guided colorization system 106 compares the training predicted color Ŷ 608 with the ground truth color Y 602 to adjust parameters of the edge-guided colorization neural network 606. The edge-guided colorization system 106 trains the edge-guided colorization neural network 606 parameters to map inputs training grayscale image X 604, training color points U 614, and canny edges V 612 as close to the ground truth color Y 602 as possible. In at least one embodiment, the edge-guided colorization system 106 utilizes a per-pixel custom character.sub.0 regression loss by comparing each pixel of the training predicted color Ŷ 608 with the corresponding pixel in the ground truth color Y 602. 
As mentioned previously, the edge-guided colorization system 106 can also utilize alternative loss functions.), Yoo and Hiskens fail to teach: and wherein the instructions, when executed by at least one processor, individually and/or collectively, cause the electronic device to equally divide the entire area of the original image, determine the specific area, and calculate the second index, or to classify a pixel in the image into any class using semantic segmentation in the original image, determine pixels corresponding to a specified interest object based on the classified class, determine an area, in which the pixels corresponding to the interest object are positioned, as the specific area, and calculate the second index (Cai [Pg 5 Par 1] its purpose is dividing the image into multiple regions, pixels of the same such that semantic is divided in a uniform area.). Cai teaches: and wherein the instructions, when executed by at least one processor, individually and/or collectively, cause the electronic device to equally divide the entire area of the original image, determine the specific area, and calculate the second index, or to classify a pixel in the image into any class using semantic segmentation in the original image, determine pixels corresponding to a specified interest object based on the classified class, determine an area, in which the pixels corresponding to the interest object are positioned, as the specific area, and calculate the second index (Cai [Pg 5 Par 1] its purpose is dividing the image into multiple regions, pixels of the same such that semantic is divided in a uniform area.). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Yoo and Hiskens with Cai. Having a way to split an image up to adjust certain spots and to identify classes, as in Cai, would benefit the Yoo and Hiskens teachings by having a way to have more accurate colors. 
Additionally, this is the application of a known technique, having a way to split an image up to adjust certain spots and to identify classes, to yield predictable results. Regarding claim 13. Yoo, Hiskens, and Cai teach: The electronic device of claim 12, wherein the instructions, when executed by at least one processor, individually and/or collectively, cause the electronic device to increase the first value to decrease the probability that the original image is classified as the partially colorized image or increase the second value to increase the probability that the original image is classified as the partially colorized image (Yoo [0004] Thus, conventional image colorizing systems are often required to re-analyze pixels or regions and adjust the desired colorization.). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Yoo and Hiskens with Cai. Having a way to split an image up to adjust certain spots and to identify classes, as in Cai, would benefit the Yoo and Hiskens teachings by having a way to have more accurate colors. Additionally, this is the application of a known technique, having a way to split an image up to adjust certain spots and to identify classes, to yield predictable results. Regarding claim 14. 
Yoo, Hiskens, and Cai teach: The electronic device of claim 12, wherein the instructions, when executed by at least one processor, individually and/or collectively, cause the electronic device to increase the third value and the fourth value to increase the influence of the second index during the process of classifying the original image as the partially colorized image, and wherein an increase in the influence of the second index indicates an increase in a degree of color richness in the specific area (Yoo [0004] Thus, conventional image colorizing systems are often required to re-analyze pixels or regions and adjust the desired colorization.). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Yoo and Hiskens with Cai. Having a way to split an image up to adjust certain spots and to identify classes, as in Cai, would benefit the Yoo and Hiskens teachings by having a way to have more accurate colors. Additionally, this is the application of a known technique, having a way to split an image up to adjust certain spots and to identify classes, to yield predictable results. Regarding claim 15. 
Yoo, Hiskens, and Cai teach: The electronic device of claim 12, wherein the instructions, when executed by at least one processor, individually and/or collectively, cause the electronic device to determine that the original image is the partially colorized image based on the first index having a value equal to or greater than the first value and less than the second value larger than the first value, the second index being greater than the third value, the difference between the second index and the first index being greater than the fourth value, and a proportion of the pixel, in which an absolute value of a difference between red (R), green (G), and blue (B) channels being greater than a specified value in comparison with all the pixels that occupy the entire area of the original image, is smaller than the second value (Yoo [0001] For example, client devices can select colors corresponding to particular areas of a grayscale image, and image colorizing systems can generate a digital image reflecting the selected colors. [0082] The edge-guided colorization system 106 compares the training predicted color Ŷ 608 with the ground truth color Y 602 to adjust parameters of the edge-guided colorization neural network 606. The edge-guided colorization system 106 trains the edge-guided colorization neural network 606 parameters to map inputs training grayscale image X 604, training color points U 614, and canny edges V 612 as close to the ground truth color Y 602 as possible. In at least one embodiment, the edge-guided colorization system 106 utilizes a per-pixel custom character.sub.0 regression loss by comparing each pixel of the training predicted color Ŷ 608 with the corresponding pixel in the ground truth color Y 602. As mentioned previously, the edge-guided colorization system 106 can also utilize alternative loss functions.). 
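The four-part test recited in claim 15 reduces to a conjunction of threshold comparisons. A minimal sketch follows, with the thresholds (v1 through v4) and the precomputed ratio of high-chroma pixels treated as hypothetical placeholders rather than values taken from any cited reference.

```python
def is_partially_colorized(idx1, idx2, high_chroma_ratio, v1, v2, v3, v4):
    """Claim-15-style decision (sketch): classify as partially colorized when
    the whole-image index idx1 lies in [v1, v2), the specific-area index idx2
    exceeds v3, idx2 exceeds idx1 by more than v4, and the proportion of pixels
    with a large absolute R/G/B channel difference stays below v2."""
    return (v1 <= idx1 < v2
            and idx2 > v3
            and (idx2 - idx1) > v4
            and high_chroma_ratio < v2)
```

For example, a mostly gray image (low idx1) with one vividly colored region (high idx2) and few high-chroma pixels overall satisfies all four conditions.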
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Yoo and Hiskens with Cai. Having a way to split an image up to adjust certain spots and to identify classes, as in Cai, would benefit the Yoo and Hiskens teachings by having a way to have more accurate colors. Additionally, this is the application of a known technique, having a way to split an image up to adjust certain spots and to identify classes, to yield predictable results. Regarding claim 16. Yoo, Hiskens, and Cai teach: The electronic device of claim 12, wherein the instructions, when executed by at least one processor, individually and/or collectively, cause the electronic device to: divide the image into two areas based on a first indicator, display a first area based on the first indicator in an original image state of the partially colorized image, display a second area based on the first indicator in a state in which a tinted color image is synthesized with the original image of the partially colorized image and the entire area of the partially colorized image, and adjust sizes of the first and second areas based on an input to the first indicator (Hiskens [0017] The feature extraction algorithms may perform object detection and possibly also object recognition. If only object detection is performed, objects may be detected using background/foreground segmentation, for example, among other techniques.). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Yoo and Hiskens with Cai. Having a way to split an image up to adjust certain spots and to identify classes, as in Cai, would benefit the Yoo and Hiskens teachings by having a way to have more accurate colors. 
Additionally, this is the application of a known technique, having a way to split an image up to adjust certain spots and to identify classes, to yield predictable results. Claim(s) 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Yoo et al. (US 10997752) in view of Hiskens et al. (US 20230064450) and Matsuoka et al. (US 20230057757). Regarding claim 6. Yoo and Hiskens teach: The electronic device of claim 1, wherein the colorfulness comprises: a first index quantifying colorfulness for the entire areas of the original image (Yoo [0001] To illustrate, conventional image colorizing systems can automatically add color to complex black and white or grayscale images based on user selection of particular fill colors. For example, client devices can select colors corresponding to particular areas of a grayscale image, and image colorizing systems can generate a digital image reflecting the selected colors. [0014] In particular, the edge-guided colorization system can utilize a Canny edge detection algorithm to generate ground truth edges from a chrominance image.); Hiskens fails to teach: and a second index quantifying colorfulness for a specific area of the original image, and wherein the instructions, when executed by at least one processor, individually and/or collectively, cause the electronic device to determine that the original image is the partially colored image based on the first index having a value greater than or equal to a first value and less than a second value greater than the first value, the second index is greater than a third value and has a difference from the first index by more than a fourth value, and a ratio of pixels having an absolute value of a difference between red (R), green (G), and blue (B) channels greater than a specified value is less than the second value through comparison with all pixels in the entire areas of the original image (Yoo [0001] For example, client devices can select colors corresponding to particular 
areas of a grayscale image, and image colorizing systems can generate a digital image reflecting the selected colors. [0082] The edge-guided colorization system 106 compares the training predicted color Ŷ 608 with the ground truth color Y 602 to adjust parameters of the edge-guided colorization neural network 606. The edge-guided colorization system 106 trains the edge-guided colorization neural network 606 parameters to map inputs training grayscale image X 604, training color points U 614, and canny edges V 612 as close to the ground truth color Y 602 as possible. In at least one embodiment, the edge-guided colorization system 106 utilizes a per-pixel custom character.sub.0 regression loss by comparing each pixel of the training predicted color Ŷ 608 with the corresponding pixel in the ground truth color Y 602. As mentioned previously, the edge-guided colorization system 106 can also utilize alternative loss functions.) (Matsuoka [0114] If a region is within the constant range from the straight line connecting the vertex K and the vertex W, the absolute value of the difference between the R value and the G value, the absolute value of the difference between the G value and the B value, and the absolute value of the difference between the B value and the R value in pixel values of a target pixel are all within a constant value (threshold value). The threshold value may be set by the user, may be set according to the manual document mode or the automatic document discrimination result, or may be set in advance.). 
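Matsuoka's [0114] criterion — a pixel lies near the K-W (black-white) line when |R-G|, |G-B|, and |B-R| are all within a threshold — combines with the claimed pixel-ratio comparison as follows. Threshold values and function names are illustrative assumptions.

```python
def is_near_achromatic(px, threshold):
    """Matsuoka-style test ([0114]): treat a pixel as achromatic when the
    absolute differences |R-G|, |G-B|, and |B-R| are all within a threshold,
    i.e. the pixel lies near the black-white line of the color cube."""
    r, g, b = px
    return max(abs(r - g), abs(g - b), abs(b - r)) <= threshold

def high_chroma_ratio(image, threshold):
    """Fraction of pixels whose channel difference exceeds the threshold,
    compared against all pixels in the entire area of the image (sketch)."""
    pixels = [px for row in image for px in row]
    chromatic = sum(1 for px in pixels
                    if not is_near_achromatic(px, threshold))
    return chromatic / len(pixels)
```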
Yoo and Matsuoka teach: and a second index quantifying colorfulness for a specific area of the original image, and wherein the instructions, when executed by at least one processor, individually and/or collectively, cause the electronic device to determine that the original image is the partially colored image based on the first index having a value greater than or equal to a first value and less than a second value greater than the first value, the second index is greater than a third value and has a difference from the first index by more than a fourth value, and a ratio of pixels having an absolute value of a difference between red (R), green (G), and blue (B) channels greater than a specified value is less than the second value through comparison with all pixels in the entire areas of the original image (Yoo [0001] For example, client devices can select colors corresponding to particular areas of a grayscale image, and image colorizing systems can generate a digital image reflecting the selected colors. [0082] The edge-guided colorization system 106 compares the training predicted color Ŷ 608 with the ground truth color Y 602 to adjust parameters of the edge-guided colorization neural network 606. The edge-guided colorization system 106 trains the edge-guided colorization neural network 606 parameters to map inputs training grayscale image X 604, training color points U 614, and canny edges V 612 as close to the ground truth color Y 602 as possible. In at least one embodiment, the edge-guided colorization system 106 utilizes a per-pixel custom character.sub.0 regression loss by comparing each pixel of the training predicted color Ŷ 608 with the corresponding pixel in the ground truth color Y 602. As mentioned previously, the edge-guided colorization system 106 can also utilize alternative loss functions.) 
(Matsuoka [0114] If a region is within the constant range from the straight line connecting the vertex K and the vertex W, the absolute value of the difference between the R value and the G value, the absolute value of the difference between the G value and the B value, and the absolute value of the difference between the B value and the R value in pixel values of a target pixel are all within a constant value (threshold value). The threshold value may be set by the user, may be set according to the manual document mode or the automatic document discrimination result, or may be set in advance.). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Yoo and Hiskens with Matsuoka. Having a way to get absolute values, as in Matsuoka, would benefit the Yoo and Hiskens teachings by having a way to have more accurate colors. Additionally, this is the application of a known technique, having a way to get absolute values, to yield predictable results. Claim(s) 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Yoo et al. (US 10997752) in view of Hiskens et al. (US 20230064450) and Nanda et al. (US 20190362478). Regarding claim 7. Yoo and Hiskens teach: The electronic device of claim 1, Yoo and Hiskens fail to teach: wherein the instructions, when executed by at least one processor, individually and/or collectively, cause the electronic device to generate a color image in which a second area is colored using a color histogram predictor, a histogram encoder, and/or a color generator (Nanda [0026] Each histogram 221-223 is a frequency distribution of each pixel value within the video across all the possible pixel values. These histogram provides the predictive model 120 information to compare the overall exposure and color distribution of the reference video with the target video.
For example, if the reference video 103 is bright, then the predictive model 120 can propagate a correct level of brightness adjustment to target video 104. Similarly, if reference video 103 has a particular tint, then predictive model 121 can propagate a correct level of tint adjustment to target video 104.) (Yoo [0012] One or more embodiments of the present disclosure include an edge-guided colorization system that utilizes a deep colorization neural network to efficiently and accurately generate colorized images using user-guided color edges.). Nanda teaches: wherein the instructions, when executed by at least one processor, individually and/or collectively, cause the electronic device to generate a color image in which a second area is colored using a color histogram predictor, a histogram encoder, and/or a color generator (Nanda [0026] Each histogram 221-223 is a frequency distribution of each pixel value within the video across all the possible pixel values. These histogram provides the predictive model 120 information to compare the overall exposure and color distribution of the reference video with the target video. For example, if the reference video 103 is bright, then the predictive model 120 can propagate a correct level of brightness adjustment to target video 104. Similarly, if reference video 103 has a particular tint, then predictive model 121 can propagate a correct level of tint adjustment to target video 104.) (Yoo [0012] One or more embodiments of the present disclosure include an edge-guided colorization system that utilizes a deep colorization neural network to efficiently and accurately generate colorized images using user-guided color edges.). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Yoo and Hiskens with Nanda. 
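Nanda [0026] describes per-channel histograms as frequency distributions of pixel values across all possible values. A minimal sketch, with the binning scheme assumed for illustration:

```python
from collections import Counter

def channel_histograms(image, bins=256):
    """Per-channel frequency distribution of pixel values, in the sense of
    Nanda [0026]. Each Counter maps a bin index to a pixel count; the simple
    linear binning here is an assumption, not Nanda's method."""
    hists = [Counter(), Counter(), Counter()]
    for row in image:
        for px in row:
            for ch, value in enumerate(px):
                hists[ch][value * bins // 256] += 1
    return hists
```

Comparing such histograms between a reference image and a target image gives a coarse measure of their overall exposure and color distribution.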
Having a color histogram predictor, as in Nanda, would benefit the Yoo and Hiskens teachings by having a way to have more accurate colors predicted. Additionally, this is the application of a known technique, having a color histogram predictor, to yield predictable results. Claim(s) 8-9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Yoo et al. (US 10997752) in view of Hiskens et al. (US 20230064450) and Remedios et al. (US 7961938). Regarding claim 8. Yoo and Hiskens teach: The electronic device of claim 1, wherein the instructions, when executed by at least one processor, individually and/or collectively, cause the electronic device to: based on the input original image being the partially colored image, determine a first area emphasized by a specific color and classified as a colored area, and a second area classified as having a black-and-white background or a monochrome background (Hiskens [0016] The partially blended color images 132 may include mostly grayscale image data (in particular, background scenery based on data from the infrared camera 120), while features of interest may be shown colorized (based on data from the RGB camera 122) to provide the user with more information about conditions in front of the cameras.); calculate, within a specified radius with reference to one pixel located in the second area, a number of all pixels and a number of pixels each emphasized by a specific color and classified as a colored pixel (Hiskens [0008] For each matching pair, a respective region of an output image is colorized by setting colors of pixels in the region based on colors of the pixels of the second object in the matching pair. The pixels in the region may have locations corresponding to the locations of the pixels in the first object of the matching pair. When the colorizing is complete, pixels not in the colorized regions have intensities of the infrared image. 
The output image is a version of the infrared image with regions colorized according to the color image.); calculate a ratio of pixels classified as colored pixels to all pixels (Remedios [0040] At 331, the ratio of the number of pixels of the search color to the total number of pixels in the image is determined.); determine colorfulness of the one pixel, based on an absolute value of a difference between red (R), green (G), and blue (B) values within one pixel (Yoo [0001] For example, client devices can select colors corresponding to particular areas of a grayscale image, and image colorizing systems can generate a digital image reflecting the selected colors. [0082] The edge-guided colorization system 106 compares the training predicted color Ŷ 608 with the ground truth color Y 602 to adjust parameters of the edge-guided colorization neural network 606. The edge-guided colorization system 106 trains the edge-guided colorization neural network 606 parameters to map inputs training grayscale image X 604, training color points U 614, and canny edges V 612 as close to the ground truth color Y 602 as possible. In at least one embodiment, the edge-guided colorization system 106 utilizes a per-pixel custom character.sub.0 regression loss by comparing each pixel of the training predicted color Ŷ 608 with the corresponding pixel in the ground truth color Y 602. As mentioned previously, the edge-guided colorization system 106 can also utilize alternative loss functions.) (Matsuoka [0114] If a region is within the constant range from the straight line connecting the vertex K and the vertex W, the absolute value of the difference between the R value and the G value, the absolute value of the difference between the G value and the B value, and the absolute value of the difference between the B value and the R value in pixel values of a target pixel are all within a constant value (threshold value). 
The threshold value may be set by the user, may be set according to the manual document mode or the automatic document discrimination result, or may be set in advance.); and determine a weight corresponding to the original image, based on the ratio of the pixels classified as the colored pixels to all pixels and the colorfulness of the one pixel (Yoo [0051] At each layer, learned weighting parameters can emphasize different features that are significant to generate an accurate prediction output. In some embodiments, the edge-guided colorization system 106 concatenates the modified edge map 324 into all (or a subset of) feature maps analyzed at intermediate layers of the edge-guided colorization neural network 314.). Yoo and Hiskens fail to teach: calculate a ratio of pixels classified as colored pixels to all pixels (Remedios [0040] At 331, the ratio of the number of pixels of the search color to the total number of pixels in the image is determined.); Remedios teaches: calculate a ratio of pixels classified as colored pixels to all pixels (Remedios [0040] At 331, the ratio of the number of pixels of the search color to the total number of pixels in the image is determined.); Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Yoo and Hiskens with Remedios. Counting a ratio of pixels, as in Remedios, would benefit the Yoo and Hiskens teachings by having a way to see how much is colored. Additionally, this is the application of a known technique, counting a ratio of pixels, to yield predictable results. Regarding claim 9. 
Yoo, Hiskens, and Remedios teach: The electronic device of claim 8, wherein the instructions, when executed by at least one processor, individually and/or collectively, cause the electronic device to: determine a weight of the color image to make a sum of the weight corresponding to the original image and the weight corresponding to the color image be 1 (Hiskens [0019] In one embodiment each potential match may be scored as a weighted combination of matching attributes.); and synthesize the input original image and the color image, based on the weight corresponding to the original image and the weight of the color image (Hiskens [0012] FIG. 2 shows a system and dataflow for generating partially blended video in accordance with one or more embodiments of the disclosure.). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Yoo and Hiskens with Remedios. Counting a ratio of pixels, as in Remedios, would benefit the Yoo and Hiskens teachings by having a way to see how much is colored. Additionally, this is the application of a known technique, counting a ratio of pixels, to yield predictable results. Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Yoo et al. (US 10997752) in view of Hiskens et al. (US 20230064450) and Sunkavalli et al. (US 20160140722). Regarding claim 10. 
Yoo and Hiskens teach: The electronic device of claim 2, Yoo and Hiskens fail to teach: wherein the instructions, when executed by at least one processor, individually and/or collectively, cause the electronic device to determine that the original image is not the partially colored image based on the first index having a value equal to or less than half of the second value, a value of an R channel being greater than or equal to a value of a G channel in all pixels of the original image, and the value of the G channel being greater than or equal to a value of a B channel in all pixels of the original image (Sunkavalli [0050] Alternately, if fast intrinsic images application 104 determines that the shading of image 202 is colored instead of grayscale, then fast intrinsic images application 104 requests user input in order to generate proxy image 310.). Sunkavalli teaches: wherein the instructions, when executed by at least one processor, individually and/or collectively, cause the electronic device to determine that the original image is not the partially colored image based on the first index having a value equal to or less than half of the second value, a value of an R channel being greater than or equal to a value of a G channel in all pixels of the original image, and the value of the G channel being greater than or equal to a value of a B channel in all pixels of the original image (Sunkavalli [0050] Alternately, if fast intrinsic images application 104 determines that the shading of image 202 is colored instead of grayscale, then fast intrinsic images application 104 requests user input in order to generate proxy image 310.). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Yoo and Hiskens with Sunkavalli. 
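The weight handling recited in claims 9 and 10 — two synthesis weights summing to 1, and a monotone-tint check (R >= G >= B across all pixels) for ruling out the partially colorized classification — can be sketched as follows. The index threshold and all names are illustrative assumptions.

```python
def blend_pixel(orig, colorized, w_orig):
    """Claim-9-style synthesis sketch: the weight of the color image is chosen
    so that the two weights sum to 1, then the images are mixed per channel."""
    w_color = 1.0 - w_orig
    return tuple(w_orig * o + w_color * c for o, c in zip(orig, colorized))

def is_monotone_tinted(image, idx1, v2):
    """Claim-10-style sketch: the image is NOT partially colorized when the
    first index is at most half the second value and every pixel satisfies
    R >= G >= B (a uniform warm tint such as sepia)."""
    if idx1 > v2 / 2:
        return False
    return all(r >= g >= b for row in image for (r, g, b) in row)
```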
Determining if something is not partially colored, as in Sunkavalli, would benefit the Yoo and Hiskens teachings by having a way to see how much is colored. Additionally, this is the application of a known technique, determining if something is not partially colored, to yield predictable results. Claim(s) 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Yoo et al. (US 10997752) in view of Hiskens et al. (US 20230064450), Cai et al. (CN 110263604), and Matsuoka et al. (US 20230057757). Regarding claim 17. Yoo, Hiskens, and Cai teach: The electronic device of claim 16, wherein the instructions, when executed by at least one processor, individually and/or collectively, cause the electronic device to: display a second indicator, and adjust a parameter value for color synthesis based on an input to the second indicator (Yoo [0001] For example, client devices can select colors corresponding to particular areas of a grayscale image, and image colorizing systems can generate a digital image reflecting the selected colors. [0082] The edge-guided colorization system 106 compares the training predicted color Ŷ 608 with the ground truth color Y 602 to adjust parameters of the edge-guided colorization neural network 606. The edge-guided colorization system 106 trains the edge-guided colorization neural network 606 parameters to map inputs training grayscale image X 604, training color points U 614, and canny edges V 612 as close to the ground truth color Y 602 as possible. In at least one embodiment, the edge-guided colorization system 106 utilizes a per-pixel custom character.sub.0 regression loss by comparing each pixel of the training predicted color Ŷ 608 with the corresponding pixel in the ground truth color Y 602. 
As mentioned previously, the edge-guided colorization system 106 can also utilize alternative loss functions.), a second parameter (R) for determining a radius of an area that is a sample in the image (Hiskens [0008] For each matching pair, a respective region of an output image is colorized by setting colors of pixels in the region based on colors of the pixels of the second object in the matching pair. The pixels in the region may have locations corresponding to the locations of the pixels in the first object of the matching pair. When the colorizing is complete, pixels not in the colorized regions have intensities of the infrared image. The output image is a version of the infrared image with regions colorized according to the color image.); Yoo, Hiskens, and Cai fail to teach: wherein parameters related to the color synthesis comprise: a first parameter (T) corresponding to a criterion related to a magnitude of an absolute value of a difference between the R, G, and B channels (Matsuoka [0114] If a region is within the constant range from the straight line connecting the vertex K and the vertex W, the absolute value of the difference between the R value and the G value, the absolute value of the difference between the G value and the B value, and the absolute value of the difference between the B value and the R value in pixel values of a target pixel are all within a constant value (threshold value). 
The threshold value may be set by the user, may be set according to the manual document mode or the automatic document discrimination result, or may be set in advance.); and a third parameter (M) indicating a maximum value of a colorfulness value (Matsuoka [0152] First, the region divider 2112 calculates, from the pixel values of the pixels included in a target region, a maximum value and a minimum value on each axis of the color space (step S120), for each region into which the pixels are classified in the initial classification process.), and wherein the type of parameter related to the color synthesis displayed on the second indicator varies depending on configurations (Matsuoka [0038] The operation panel 50 inputs an operation to the image forming apparatus 1 and displays various types of information. An operation acceptor 52 includes an input device such as a setting button and a numeric keypad for inputting an operation mode and various types of settings of the image forming apparatus 1. A display 54 includes a display device such as an LCD (Liquid crystal display), an organic EL (electro-luminescence) display, and a micro LED display. The operation panel 50 may be a touch panel in which the operation acceptor 52 and the display 54 are integrally formed. In this case, a method of detecting an input to the touch panel may be a common detection method such as a resistive method, an infrared method, an electromagnetic induction method, and a capacitive method.). 
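The three claim-17 parameters can be collected in a small configuration object. Only the meanings of T, R, and M come from the claim language; the default values and the clamp helper are placeholders for illustration.

```python
from dataclasses import dataclass

@dataclass
class SynthesisParams:
    """Sketch of the claim-17 color-synthesis parameters."""
    T: int = 30       # criterion on the magnitude of |R-G|, |G-B|, |B-R|
    R: int = 5        # radius of the sampled area around a pixel
    M: float = 255.0  # maximum value of a colorfulness value

def clamp_colorfulness(value, params):
    """Cap a computed colorfulness value at the configured maximum M."""
    return min(value, params.M)
```

Which of these parameters is surfaced on the second indicator could then vary with device configuration, as the claim recites.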
Matsuoka teaches:

wherein parameters related to the color synthesis comprise: a first parameter (T) corresponding to a criterion related to a magnitude of an absolute value of a difference between the R, G, and B channels (Matsuoka [0114] If a region is within the constant range from the straight line connecting the vertex K and the vertex W, the absolute value of the difference between the R value and the G value, the absolute value of the difference between the G value and the B value, and the absolute value of the difference between the B value and the R value in pixel values of a target pixel are all within a constant value (threshold value). The threshold value may be set by the user, may be set according to the manual document mode or the automatic document discrimination result, or may be set in advance.);

and a third parameter (M) indicating a maximum value of a colorfulness value (Matsuoka [0152] First, the region divider 2112 calculates, from the pixel values of the pixels included in a target region, a maximum value and a minimum value on each axis of the color space (step S120), for each region into which the pixels are classified in the initial classification process.),

and wherein the type of parameter related to the color synthesis displayed on the second indicator varies depending on configurations (Matsuoka [0038] The operation panel 50 inputs an operation to the image forming apparatus 1 and displays various types of information. An operation acceptor 52 includes an input device such as a setting button and a numeric keypad for inputting an operation mode and various types of settings of the image forming apparatus 1. A display 54 includes a display device such as an LCD (Liquid crystal display), an organic EL (electro-luminescence) display, and a micro LED display. The operation panel 50 may be a touch panel in which the operation acceptor 52 and the display 54 are integrally formed. In this case, a method of detecting an input to the touch panel may be a common detection method such as a resistive method, an infrared method, an electromagnetic induction method, and a capacitive method.).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Yoo, Hiskens, and Cai with Matsuoka. Indicating parameters and checking color values, as in Matsuoka, would benefit the Yoo, Hiskens, and Cai teachings by providing additional checks depending on the color. Additionally, this is the application of a known technique, indicating parameters and checking color values, to yield predictable results.

Claim(s) 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Yoo et al. (US 10997752) in view of Hiskens et al. (US 20230064450), Cai et al. (CN 110263604), Matsuoka et al. (US 20230057757), and Wang et al. (US 20190158796).

Regarding claim 18:

Yoo, Hiskens, Cai, and Matsuoka teach: The electronic device of claim 17.

Yoo, Hiskens, Cai, and Matsuoka fail to teach: wherein the instructions, when executed by at least one processor, individually and/or collectively, cause the electronic device to differently determine weights for respective parameters based on the input to the second indicator, and wherein the weight has a value of 0 or more and 1 or less (Wang [0223] In some embodiments, the second region weight may be 1. In some embodiments, the third region weight may be 0. In some embodiments, the second region weight or the third region weight may be any value no less than 0 and no more than 1. In some embodiments, the second region weight may be greater than the third region weight.).
Wang teaches: wherein the instructions, when executed by at least one processor, individually and/or collectively, cause the electronic device to differently determine weights for respective parameters based on the input to the second indicator, and wherein the weight has a value of 0 or more and 1 or less (Wang [0223] In some embodiments, the second region weight may be 1. In some embodiments, the third region weight may be 0. In some embodiments, the second region weight or the third region weight may be any value no less than 0 and no more than 1. In some embodiments, the second region weight may be greater than the third region weight.).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Yoo, Hiskens, Cai, and Matsuoka with Wang. Constraining the weight to a value of 0 or more and 1 or less, as in Wang, would benefit the Yoo, Hiskens, Cai, and Matsuoka teachings by providing a defined range for the weight. Additionally, this is the application of a known technique, constraining the weight to a defined range, to yield predictable results.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DENIS VASILIY MINKO whose telephone number is (571)270-5226. The examiner can normally be reached Monday-Thursday 8:30-6:00 EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Said Broome, can be reached at 571-272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DENIS VASILIY MINKO/
Examiner, Art Unit 2612

/Said Broome/
Supervisory Patent Examiner, Art Unit 2612
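For context on the disputed limitations: the Matsuoka passage cited against the first parameter (T) describes a per-pixel test in which a pixel is treated as near-achromatic when the absolute differences between each pair of its R, G, and B values all fall within a threshold, and the Wang passage cited for claim 18 describes weights constrained to values from 0 to 1 inclusive. A minimal sketch of checks in that style is below; the function names and values are illustrative assumptions, not code from the application or the cited references.

```python
def within_threshold(pixel, t):
    """True if |R-G|, |G-B| and |B-R| are all <= t, the style of
    per-pixel near-gray check described in Matsuoka [0114]."""
    r, g, b = pixel
    return abs(r - g) <= t and abs(g - b) <= t and abs(b - r) <= t


def clamp_weight(w):
    """Constrain a parameter weight to the claimed range of
    0 or more and 1 or less (compare Wang [0223])."""
    return max(0.0, min(1.0, w))


# A near-gray pixel passes a threshold of T = 10; a saturated
# red pixel does not; out-of-range weights are clamped to [0, 1].
print(within_threshold((128, 130, 125), 10))  # True
print(within_threshold((200, 30, 30), 10))    # False
print(clamp_weight(1.7))                      # 1.0
```

Note that such a threshold test is one simple way to operationalize "a criterion related to a magnitude of an absolute value of a difference between the R, G, and B channels"; the application itself may define T differently.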

Prosecution Timeline

May 23, 2024
Application Filed
Feb 24, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597195
METHOD FOR GENERATING PHOTOGRAPHED IMAGE DATA USING VIRTUAL ORGANOID
2y 5m to grant Granted Apr 07, 2026
Patent 12579732
Face-Oriented Geometry Streaming
2y 5m to grant Granted Mar 17, 2026
Patent 12518497
MODEL ALIGNMENT METHOD
2y 5m to grant Granted Jan 06, 2026
Patent 12518641
SYSTEMS AND METHODS FOR GENERATING AVIONIC DISPLAYS INDICATING WAKE TURBULENCE
2y 5m to grant Granted Jan 06, 2026
Patent 12462476
METHOD, ELECTRONIC DEVICE, AND COMPUTER PROGRAM PRODUCT FOR GENERATING THREE-DIMENSIONAL MODEL
2y 5m to grant Granted Nov 04, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
62%
Grant Probability
79%
With Interview (+16.7%)
2y 5m
Median Time to Grant
Low
PTA Risk
Based on 16 resolved cases by this examiner. Grant probability derived from career allow rate.
