Prosecution Insights
Last updated: April 19, 2026
Application No. 17/144,734

METHOD AND APPARATUS FOR ASSIGNING COLOURS TO AN IMAGE

Non-Final OA: §102, §103, Double Patenting (DP)
Filed: Jan 08, 2021
Examiner: ZAK, JACQUELINE ROSE
Art Unit: 2666
Tech Center: 2600 (Communications)
Assignee: Curvebeam AI Limited
OA Round: 8 (Non-Final)
Grant Probability: 67% (Favorable)
Projected OA Rounds: 8-9
Projected Time to Grant: 2y 10m
Grant Probability With Interview: 55%

Examiner Intelligence

Career Allow Rate: 67% (above average; 8 granted / 12 resolved; +4.7% vs TC avg)
Interview Lift: -11.4% (minimal), measured across resolved cases with an interview
Avg Prosecution: 2y 10m (typical timeline)
Total Applications: 58 across all art units (46 currently pending)

Statute-Specific Performance

§101: 5.7% (-34.3% vs TC avg)
§103: 56.3% (+16.3% vs TC avg)
§102: 21.1% (-18.9% vs TC avg)
§112: 13.8% (-26.2% vs TC avg)
Tech Center averages are estimates; based on career data from 12 resolved cases.

Office Action

Rejection grounds: §102, §103, Double Patenting
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/05/2026 has been entered.

Claim Status

Claims 1, 3-5, 7-11, 22, and 24-37 are pending, following the response filed 02/05/2026. Claims 1, 5, 22, 28, 31, and 33 are currently amended.

Response to Arguments and Amendments

The nonstatutory double patenting rejection is withdrawn in view of the approved Terminal Disclaimer filed 02/05/2026. The 35 U.S.C. 101 rejection is withdrawn in view of the newly added amendments. Applicant's arguments have been considered but are moot because the new ground of rejection does not rely on the references applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 8-9, 22, 26, and 30-37 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Bhoyar (Bhoyar, Kishor, and Omprakash Kakde. "Colour image segmentation using fast fuzzy c-means algorithm." ELCVIA: Electronic Letters on Computer Vision and Image Analysis (2010): 18-31).

Regarding claim 1, Bhoyar teaches a computer-implemented image processing method comprising ([Abstract] This paper proposes modified FCM (Fuzzy C-Means) approach to colour image segmentation using JND (Just Noticeable Difference) histogram): receiving an image comprising pixels or voxels; identifying the pixels or voxels of the image in a 3D colour referential map (RGB), wherein the 3D colour referential map is a preselected 3D colour scheme ([2.3 Computing JND Histogram] In this section, we propose an algorithm for computing histogram of a colour image in RGB space…Thus Table 1 contains the R, G, B coordinates and the respective frequency information or population (H) of the tri-colour stimulus, while Table 2 contains the x and y positional co-ordinates in the image and respective colour index (row index) in Table 1. The number of rows in Table 1 is equal to the number of different colour shades available in the image.
In Table 1 there will be one entry for each colour shade while in Table 2 there will be one entry for each pixel); identifying in the 3D colour referential map a cluster of the pixels or voxels with similar visual appearance by pixel or voxel value, wherein the pixel or voxel value is an intensity or shade value ([3 Histogram Agglomeration] Agglomeration in chemical processes attributes to formation of bigger lumps from smaller particles. In the digital image segmentation, the similar pixels (in some sense) are clustered together under some similarity criteria… ii)Starting from the first colour in Table 1, compare the colour with the next colour in table 1); assigning a colour to each pixel or voxel in the cluster of pixels or voxels; within the image or a part thereof, modifying each said pixel or voxel in the cluster of pixels or voxels to be of said colour if each pixel or voxel in the cluster of pixels or voxels is not of said colour ([3 Histogram Agglomeration] After the compressed histogram of a real life image is obtained using the basic JND histogram algorithm given in section 2, the agglomeration or region merging technique can further be used to reduce the number of colours by combining the smaller segments (less than 0.1% of the image size) with similar coloured larger segments), thereby highlighting the pixels or voxels in the cluster (see Fig. 1-6, agglomeration between adjacent pixels can make details more or less visible locally); outputting the image as a modified image that includes the modified pixels or voxels (see Fig. 
1-6); and wherein the method includes adjusting, under user control, a minimum perceptible pixel or voxel value difference, being a colour difference between a first pixel or voxel and a second pixel or voxel that is furthest away in colour from the first pixel or voxel beyond which the user can clearly perceive a difference in colour from the first pixel ([Algorithm 1: JND histogram computing algorithm] i) Select a proper similarity threshold (histogram binning threshold), Θ1 (JNDeye2 ≤ Θ1 ≤ JNDh2), depending on the precision of vision from fine to broad as required by the application. [2.2 Approximating the value of colour similarity threshold (JND)] Let C1 and C2 be two RGB colours in the new quantized space. Let C1 = (Jr1,Jg1,Jb1) = (0,0,0) and its immediate JND neighbour, that is 1 noticeable difference away, is C2 = (Jr2,Jg2,Jb2) = (255/24,255/28,255/26). Hence JNDeye = sqrt((255/24)^2 + (255/28)^2 + (255/26)^2) = sqrt(285.27). Using equation (1) the squared JND threshold of human perception is given by equation 2: Θ = JNDh2 = 2567).

Regarding claim 8, Bhoyar teaches the method of claim 1. Bhoyar further teaches automatically identifying another cluster in the 3D colour referential map of pixels or voxels with similar visual appearance by pixel or voxel value within the image or part thereof ([3 Histogram Agglomeration] Agglomeration in chemical processes attributes to formation of bigger lumps from smaller particles. In the digital image segmentation, the similar pixels (in some sense) are clustered together under some similarity criteria…ii) Starting from the first colour in Table 1, compare the colour with the next colour in table 1. iii) If the population of the smaller segment is smaller than .1% and the two segments are similar using θ2, merge the ith colour with the previous one (the first in Table 1), their populations will be added and the colour of larger population will represent the merger. iv) The merged entry will be removed from Table 1.
This reduces number of rows in Table 1. In Table 2 the colour index to be merged is changed by the index to which it is merged. v) Thus the first colour in the Table 1 will be compared with every remaining colour in Table 1 and step ii is repeated if required. vi) Step ii, iii and iv are repeated for every colour in the Table 1. vii) Steps ii to v are repeated till the Table 1 does not reduce further i.e. equilibrium has reached. viii) Table 2 is sorted in ascending order of the colour index).

Regarding claim 9, Bhoyar teaches the method of claim 1. Bhoyar further teaches assigning different colours to the identified cluster of pixels or voxels ([3 Histogram Agglomeration] Agglomeration in chemical processes attributes to formation of bigger lumps from smaller particles. In the digital image segmentation, the similar pixels (in some sense) are clustered together under some similarity criteria…ii) Starting from the first colour in Table 1, compare the colour with the next colour in table 1. iii) If the population of the smaller segment is smaller than .1% and the two segments are similar using θ2, merge the ith colour with the previous one (the first in Table 1), their populations will be added and the colour of larger population will represent the merger. iv) The merged entry will be removed from Table 1. This reduces number of rows in Table 1. In Table 2 the colour index to be merged is changed by the index to which it is merged. v) Thus the first colour in the Table 1 will be compared with every remaining colour in Table 1 and step ii is repeated if required. vi) Step ii, iii and iv are repeated for every colour in the Table 1. vii) Steps ii to v are repeated till the Table 1 does not reduce further i.e. equilibrium has reached. viii) Table 2 is sorted in ascending order of the colour index.
[3 Histogram Agglomeration] After the compressed histogram of a real life image is obtained using the basic JND histogram algorithm given in section 2, the agglomeration or region merging technique can further be used to reduce the number of colours by combining the smaller segments (less than 0.1% of the image size) with similar coloured larger segments).

Regarding claim 22, Bhoyar teaches an image processing system, the system comprising a processor configured to (Table 3: Average Performance on BSD (*On AMD Athlon 1.61 GHz processor, 1GB RAM and MATLAB 7 running on Windows XP)): receive an image comprising pixels or voxels; identify the pixels or voxels of the image in a 3D colour referential map (RGB), wherein the 3D colour referential map is a preselected 3D colour scheme ([2.3 Computing JND Histogram] In this section, we propose an algorithm for computing histogram of a colour image in RGB space…Thus Table 1 contains the R, G, B coordinates and the respective frequency information or population (H) of the tri-colour stimulus, while Table 2 contains the x and y positional co-ordinates in the image and respective colour index (row index) in Table 1. The number of rows in Table 1 is equal to the number of different colour shades available in the image. In Table 1 there will be one entry for each colour shade while in Table 2 there will be one entry for each pixel); identify in the 3D colour referential map a cluster of the pixels or voxels with similar visual appearance by pixel or voxel value, wherein the pixel or voxel value is an intensity or shade value ([3 Histogram Agglomeration] Agglomeration in chemical processes attributes to formation of bigger lumps from smaller particles.
In the digital image segmentation, the similar pixels (in some sense) are clustered together under some similarity criteria…ii) Starting from the first colour in Table 1, compare the colour with the next colour in table 1); assign a colour to each pixel or voxel in the cluster of pixels or voxels; within the image or a part thereof, modify each said pixel or voxel in the cluster of pixels or voxels to be of said colour if each pixel or voxel in the cluster of pixels or voxels is not of said colour ([3 Histogram Agglomeration] After the compressed histogram of a real life image is obtained using the basic JND histogram algorithm given in section 2, the agglomeration or region merging technique can further be used to reduce the number of colours by combining the smaller segments (less than 0.1% of the image size) with similar coloured larger segments), thereby highlighting the pixels or voxels in the cluster (see Fig. 1-6, agglomeration between adjacent pixels can make details more or less visible locally); output the image as a modified image that includes the modified pixels or voxels (see Fig. 1-6); and wherein the system enables user adjustment of a minimum perceptible pixel or voxel value difference, being a colour difference between a first pixel or voxel and a second pixel or voxel that is furthest away in colour from the first pixel or voxel beyond which the user can clearly perceive a difference in colour from the first pixel ([Algorithm 1: JND histogram computing algorithm] i) Select a proper similarity threshold (histogram binning threshold), Θ1 (JNDeye2 ≤ Θ1≤ JNDh2), depending on the precision of vision from fine to broad as required by the application. [2.2 Approximating the value of colour similarity threshold (JND)] Let C1 and C2 be two RGB colours in the new quantized space. Let C1= (Jr1,Jg1,Jb1) =(0,0,0) and its immediate JND neighbour, that is 1 noticeable difference away is C2= (Jr2,Jg2,Jb2) =(255/24,255/28,255/26). 
Hence JNDeye = sqrt((255/24)^2 + (255/28)^2 + (255/26)^2) = sqrt(285.27). Using equation (1) the squared JND threshold of human perception is given by equation 2: Θ = JNDh2 = 2567).

Regarding claim 26, Bhoyar teaches the method of claim 1. Bhoyar further teaches a non-transitory computer readable medium storing a computer program that comprises instructions that, when executed on a computing device, control the device to perform the method (Table 3: Average Performance on BSD (*On AMD Athlon 1.61 GHz processor, 1GB RAM and MATLAB 7 running on Windows XP)).

Regarding claim 30, Bhoyar teaches the system of claim 22. Bhoyar further teaches wherein the processor is further configured to assign different colours to the identified cluster of pixels or voxels ([3 Histogram Agglomeration] Agglomeration in chemical processes attributes to formation of bigger lumps from smaller particles. In the digital image segmentation, the similar pixels (in some sense) are clustered together under some similarity criteria…ii) Starting from the first colour in Table 1, compare the colour with the next colour in table 1. iii) If the population of the smaller segment is smaller than .1% and the two segments are similar using θ2, merge the ith colour with the previous one (the first in Table 1), their populations will be added and the colour of larger population will represent the merger. iv) The merged entry will be removed from Table 1. This reduces number of rows in Table 1. In Table 2 the colour index to be merged is changed by the index to which it is merged. v) Thus the first colour in the Table 1 will be compared with every remaining colour in Table 1 and step ii is repeated if required. vi) Step ii, iii and iv are repeated for every colour in the Table 1. vii) Steps ii to v are repeated till the Table 1 does not reduce further i.e. equilibrium has reached. viii) Table 2 is sorted in ascending order of the colour index.
[3 Histogram Agglomeration] After the compressed histogram of a real life image is obtained using the basic JND histogram algorithm given in section 2, the agglomeration or region merging technique can further be used to reduce the number of colours by combining the smaller segments (less than 0.1% of the image size) with similar coloured larger segments).

Regarding claim 31, Bhoyar teaches a computer-implemented image processing method comprising ([Abstract] This paper proposes modified FCM (Fuzzy C-Means) approach to colour image segmentation using JND (Just Noticeable Difference) histogram): receiving an image comprising pixels or voxels; identifying the pixels or voxels of the image in a 3D colour referential map (RGB), wherein the 3D colour referential map is a preselected 3D colour scheme ([2.3 Computing JND Histogram] In this section, we propose an algorithm for computing histogram of a colour image in RGB space…Thus Table 1 contains the R, G, B coordinates and the respective frequency information or population (H) of the tri-colour stimulus, while Table 2 contains the x and y positional co-ordinates in the image and respective colour index (row index) in Table 1. The number of rows in Table 1 is equal to the number of different colour shades available in the image. In Table 1 there will be one entry for each colour shade while in Table 2 there will be one entry for each pixel); identifying in the 3D colour referential map a cluster of the pixels or voxels with similar visual appearance by pixel or voxel value, wherein the pixel or voxel value is an intensity or shade value ([3 Histogram Agglomeration] Agglomeration in chemical processes attributes to formation of bigger lumps from smaller particles.
In the digital image segmentation, the similar pixels (in some sense) are clustered together under some similarity criteria…ii) Starting from the first colour in Table 1, compare the colour with the next colour in table 1); assigning a colour to each pixel or voxel in the cluster of pixels or voxels; within the image or a part thereof, modifying each said pixel or voxel in the cluster of pixels or voxels to be of said colour if each pixel or voxel in the cluster of pixels or voxels is not of said colour ([3 Histogram Agglomeration] After the compressed histogram of a real life image is obtained using the basic JND histogram algorithm given in section 2, the agglomeration or region merging technique can further be used to reduce the number of colours by combining the smaller segments (less than 0.1% of the image size) with similar coloured larger segments), thereby obscuring or masking the pixels or voxels in the cluster (see Fig. 1-6, agglomeration between adjacent pixels can make details more or less visible locally); outputting the image as a modified image that includes the modified pixels or voxels (see Fig. 1-6); and wherein the method includes adjusting, under user control, a minimum perceptible pixel or voxel value difference, being a colour difference between a first pixel or voxel and a second pixel or voxel that is furthest away in colour from the first pixel or voxel beyond which the user can clearly perceive a difference in colour from the first pixel ([Algorithm 1: JND histogram computing algorithm] i) Select a proper similarity threshold (histogram binning threshold), Θ1 (JNDeye2 ≤ Θ1≤ JNDh2), depending on the precision of vision from fine to broad as required by the application. [2.2 Approximating the value of colour similarity threshold (JND)] Let C1 and C2 be two RGB colours in the new quantized space. Let C1= (Jr1,Jg1,Jb1) =(0,0,0) and its immediate JND neighbour, that is 1 noticeable difference away is C2= (Jr2,Jg2,Jb2) =(255/24,255/28,255/26). 
Hence JNDeye = sqrt((255/24)^2 + (255/28)^2 + (255/26)^2) = sqrt(285.27). Using equation (1) the squared JND threshold of human perception is given by equation 2: Θ = JNDh2 = 2567).

Regarding claim 32, Bhoyar teaches the method of claim 31. Bhoyar further teaches a non-transitory computer readable medium storing a computer program that comprises instructions that, when executed on a computing device, control the device to perform the method (Table 3: Average Performance on BSD (*On AMD Athlon 1.61 GHz processor, 1GB RAM and MATLAB 7 running on Windows XP)).

Regarding claim 33, Bhoyar teaches an image processing system, the system comprising a processor configured to (Table 3: Average Performance on BSD (*On AMD Athlon 1.61 GHz processor, 1GB RAM and MATLAB 7 running on Windows XP)): receive an image comprising pixels or voxels; identify the pixels or voxels of the image in a 3D colour referential map (RGB), wherein the 3D colour referential map is a preselected 3D colour scheme ([2.3 Computing JND Histogram] In this section, we propose an algorithm for computing histogram of a colour image in RGB space…Thus Table 1 contains the R, G, B coordinates and the respective frequency information or population (H) of the tri-colour stimulus, while Table 2 contains the x and y positional co-ordinates in the image and respective colour index (row index) in Table 1. The number of rows in Table 1 is equal to the number of different colour shades available in the image. In Table 1 there will be one entry for each colour shade while in Table 2 there will be one entry for each pixel); identify in the 3D colour referential map a cluster of the pixels or voxels with similar visual appearance by pixel or voxel value, wherein the pixel or voxel value is an intensity or shade value ([3 Histogram Agglomeration] Agglomeration in chemical processes attributes to formation of bigger lumps from smaller particles.
In the digital image segmentation, the similar pixels (in some sense) are clustered together under some similarity criteria…ii) Starting from the first colour in Table 1, compare the colour with the next colour in table 1); assign a colour to each pixel or voxel in the cluster of pixels or voxels; within the image or a part thereof, modify each said pixel or voxel in the cluster of pixels or voxels to be of said colour if each pixel or voxel in the cluster of pixels or voxels is not of said colour ([3 Histogram Agglomeration] After the compressed histogram of a real life image is obtained using the basic JND histogram algorithm given in section 2, the agglomeration or region merging technique can further be used to reduce the number of colours by combining the smaller segments (less than 0.1% of the image size) with similar coloured larger segments), thereby obscuring or masking the pixels or voxels in the cluster (see Fig. 1-6, agglomeration between adjacent pixels can make details more or less visible locally); output the image as a modified image that includes the modified pixels or voxels (see Fig. 1-6); and wherein the system enables user adjustment of a minimum perceptible pixel or voxel value difference, being a colour difference between a first pixel or voxel and a second pixel or voxel that is furthest away in colour from the first pixel or voxel beyond which the user can clearly perceive a difference in colour from the first pixel ([Algorithm 1: JND histogram computing algorithm] i) Select a proper similarity threshold (histogram binning threshold), Θ1 (JNDeye2 ≤ Θ1≤ JNDh2), depending on the precision of vision from fine to broad as required by the application. [2.2 Approximating the value of colour similarity threshold (JND)] Let C1 and C2 be two RGB colours in the new quantized space. Let C1= (Jr1,Jg1,Jb1) =(0,0,0) and its immediate JND neighbour, that is 1 noticeable difference away is C2= (Jr2,Jg2,Jb2) =(255/24,255/28,255/26). 
Hence JNDeye = sqrt((255/24)^2 + (255/28)^2 + (255/26)^2) = sqrt(285.27). Using equation (1) the squared JND threshold of human perception is given by equation 2: Θ = JNDh2 = 2567).

Regarding claim 34, Bhoyar teaches the method of claim 1. Bhoyar further teaches wherein the 3D colour scheme is RGB (Red, Green, Blue), HLS (Hue, Lightness, Saturation) or HVS (Hue, Value, Saturation) ([2.3 Computing JND Histogram] In this section, we propose an algorithm for computing histogram of a colour image in RGB space…Thus Table 1 contains the R, G, B coordinates and the respective frequency information or population (H) of the tri-colour stimulus, while Table 2 contains the x and y positional co-ordinates in the image and respective colour index (row index) in Table 1).

Regarding claim 35, Bhoyar teaches the system of claim 22. Bhoyar further teaches wherein the 3D colour scheme is RGB (Red, Green, Blue), HLS (Hue, Lightness, Saturation) or HVS (Hue, Value, Saturation) ([2.3 Computing JND Histogram] In this section, we propose an algorithm for computing histogram of a colour image in RGB space…Thus Table 1 contains the R, G, B coordinates and the respective frequency information or population (H) of the tri-colour stimulus, while Table 2 contains the x and y positional co-ordinates in the image and respective colour index (row index) in Table 1).

Regarding claim 36, Bhoyar teaches the method of claim 31. Bhoyar further teaches wherein the 3D colour scheme is RGB (Red, Green, Blue), HLS (Hue, Lightness, Saturation) or HVS (Hue, Value, Saturation) ([2.3 Computing JND Histogram] In this section, we propose an algorithm for computing histogram of a colour image in RGB space…Thus Table 1 contains the R, G, B coordinates and the respective frequency information or population (H) of the tri-colour stimulus, while Table 2 contains the x and y positional co-ordinates in the image and respective colour index (row index) in Table 1).
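For readers tracing the §102 mappings, the JND histogram binning and agglomeration steps that the examiner repeatedly cites from Bhoyar can be sketched roughly as follows. This is an illustrative reconstruction only, not code from the reference: the function names (`jnd_histogram`, `agglomerate`) and the greedy first-fit binning are assumptions, and the squared threshold Θ1 is built from the quoted 24/28/26-level quantisation of R, G and B.

```python
def dist2(c1, c2):
    """Squared Euclidean distance between two RGB colours."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2))

# Squared "just noticeable difference" threshold from the quoted
# quantisation of R, G, B into 24, 28 and 26 levels respectively.
THETA1 = (255 / 24) ** 2 + (255 / 28) ** 2 + (255 / 26) ** 2

def jnd_histogram(pixels, theta=THETA1):
    """Greedy JND binning (hypothetical reconstruction): a pixel joins
    the first bin whose representative colour is within theta;
    otherwise it opens a new bin. bins plays the role of Table 1,
    index the role of Table 2's per-pixel colour index."""
    bins = []    # each entry: [representative colour, population]
    index = []   # bin index for each pixel, in input order
    for c in pixels:
        for i, (rep, _pop) in enumerate(bins):
            if dist2(c, rep) <= theta:
                bins[i][1] += 1
                index.append(i)
                break
        else:
            bins.append([c, 1])
            index.append(len(bins) - 1)
    return bins, index

def agglomerate(bins, n_pixels, theta2=THETA1, min_frac=0.001):
    """Region-merging step: fold any bin holding less than min_frac of
    the pixels into a similar (within theta2) larger bin, repeating
    until no further merge is possible (Bhoyar's 0.1% rule)."""
    changed = True
    while changed:
        changed = False
        for i, (rep_i, pop_i) in enumerate(bins):
            if pop_i >= min_frac * n_pixels:
                continue
            for j, (rep_j, pop_j) in enumerate(bins):
                if i != j and pop_j > pop_i and dist2(rep_i, rep_j) <= theta2:
                    bins[j][1] += pop_i  # larger colour represents the merger
                    del bins[i]
                    changed = True
                    break
            if changed:
                break
    return bins
```

On a toy pixel list, near-black pixels fall into one bin and near-white pixels into another; the merge step then folds any bin below the population cutoff into the closest sufficiently similar larger bin.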
Regarding claim 37, Bhoyar teaches the system of claim 33. Bhoyar further teaches wherein the 3D colour scheme is RGB (Red, Green, Blue), HLS (Hue, Lightness, Saturation) or HVS (Hue, Value, Saturation) ([2.3 Computing JND Histogram] In this section, we propose an algorithm for computing histogram of a colour image in RGB space…Thus Table 1 contains the R, G, B coordinates and the respective frequency information or population (H) of the tri-colour stimulus, while Table 2 contains the x and y positional co-ordinates in the image and respective colour index (row index) in Table 1).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 3, 10-11, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Bhoyar in view of Radha (Radha, N., and M. Tech. "Comparison of contrast stretching methods of image enhancement techniques for acute leukemia images." Int. J. Eng. Res. Technol 1.6 (2012): 1-9).

Regarding claim 3, Bhoyar teaches the method of claim 1. Radha, in the same field of endeavor of pixel color adjustment, teaches wherein the image is a medical image ([Abstract] medical professional using medical images to diagnose leukemia). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Bhoyar with the teachings of Radha to use medical images because "there are blurness and effects of unwanted noise on blood leukaemia images that sometimes result in false diagnosis. Thus image pre-processing such as image enhancement techniques are needed to improve this situation. This project proposes several contrast enhancement techniques" [Radha Abstract].

Regarding claim 10, Bhoyar teaches the method of claim 9.
Radha, in the same field of endeavor of pixel color adjustment, teaches wherein assigning colours to the image or part thereof, comprises: selecting a sequence of colours for assignment to the image or part thereof; determining a minimum intensity IMIN within the image or part thereof; determining a maximum intensity IMAX within the image or part thereof; and determining relative intensity values RIV(i) for each pixel or voxel i according to: RIV(i) = f((I(i) - IMIN) / (IMAX - IMIN)), where I(i) is an intensity of pixel or voxel i, and f is a preselected function; and assigning colours to at least some pixels or voxels in the image or part thereof based on the relative intensity values and an order of each of the colours in the sequence. [Two greyscale figures reproduced from Radha in the Office Action.] Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Bhoyar with the teachings of Radha to assign colors to pixels based on the calculated relative intensity because "Improvement in quality of medical images can be achieved by using Contrast stretching" [Radha pg. 2].

Regarding claim 11, Bhoyar and Radha teach the method of claim 10. Radha further teaches comprising: (i) selecting intensity values in the original image and applying the method only to pixels or voxels of the intensity values thus selected; (ii) selecting automatically the image or a part thereof based on one or more criteria; and/or (iii) generating a new image by colouring the image or part thereof according to the sequence of colours ([2.2.1 Local and global contrast stretching] Local contrast stretching (LCS) is an enhancement method performed on an image for locally adjusting each picture element value to improve the visualization of structures in both darkest and lightest portions of the image at the same time.
LCS is performed by sliding windows (called the KERNEL) across the image and adjusting the center element. [3.1. Results for local contrast stretching] Figure 4, Figure 5, Figure 6 shows original the three images. Meanwhile, the results for each normal, bright and dark image for Local Contrast Stretching technique are shown in, Figure 7, Figure 8 and Figure 9). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Bhoyar with the teachings of Radha to generate a new image by coloring according to a sequence of colors because "The resultant, images become clearer and the features of leukemia cells can easily been seen and improved from the original for each category. Nucleus and cytoplasm of immature white blood cells become clearer. Hence, they can easily been discussed by hematologists" [Radha pg. 4].

Regarding claim 24, Bhoyar teaches the system of claim 22. Radha, in the same field of endeavor of pixel color adjustment, teaches wherein the image is a medical image ([Abstract] medical professional using medical images to diagnose leukemia). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Bhoyar with the teachings of Radha to use medical images because "there are blurness and effects of unwanted noise on blood leukaemia images that sometimes result in false diagnosis. Thus image pre-processing such as image enhancement techniques are needed to improve this situation. This project proposes several contrast enhancement techniques" [Radha Abstract].

Claims 4-5, 7, 25, 27, and 29 are rejected under 35 U.S.C. 103 as being unpatentable over Bhoyar in view of Jin (Jin, Liu, Fu Xiao, and Wang Haopeng. "Iris image segmentation based on K-means cluster." 2010 IEEE International Conference on Intelligent Computing and Intelligent Systems. Vol. 3. IEEE, 2010).
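The relative-intensity mapping recited in claim 10, RIV(i) = f((I(i) - IMIN) / (IMAX - IMIN)) followed by colour assignment in sequence order, can be sketched as below. This is a minimal illustration under assumed semantics: the claim does not fix how RIV selects a colour, so scaling RIV to a palette index is an assumption, as is the name `assign_colours`.

```python
def assign_colours(intensities, palette, f=lambda x: x):
    """Map each intensity to a palette colour by its relative
    intensity RIV(i) = f((I(i) - IMIN) / (IMAX - IMIN)), then by
    the colour's position in the selected sequence (assumed)."""
    i_min, i_max = min(intensities), max(intensities)
    span = i_max - i_min
    out = []
    for value in intensities:
        riv = f((value - i_min) / span) if span else 0.0
        # clamp, then pick the palette entry at the scaled position
        k = min(int(riv * len(palette)), len(palette) - 1)
        out.append(palette[k])
    return out
```

With the identity f, intensities at the minimum, midpoint and maximum map to the first, middle and last palette entries; a nonlinear f (for instance a gamma curve) biases the mapping, which appears to be the role of the claim's "preselected function".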
Regarding claim 4, Bhoyar teaches the method of claim 1. Jin, in the same field of endeavor of pixel clustering, teaches wherein identifying the cluster of pixels or voxels comprises: selecting at least one pixel or voxel value within the image or part thereof; creating a pixel or voxel value profile curve from a reference point (xi, 0) corresponding to the at least one pixel or voxel value to all pixel or voxel values in a selected pixel or voxel value scheme ([B. Image segmentation based on K-Means clustering] A normalized gray-level co-occurrence histogram, similar to the color correlogram in Ref. [5], is proposed as a feature set for use in the K-Means clustering algorithm. For a pixel z∗=(x∗,y∗), let I(z∗) denote its gray intensity value. The gray levels of I are quantized into m bins); detecting a closest point (s1) on the pixel or voxel value profile curve to the reference point (xi, 0); locating a point of inflexion on the pixel or voxel value profile curve between the reference point (xi, 0) and the closest point (s1); and segmenting a portion of the pixel or voxel value profile curve between the reference point (xi, 0) and the point of inflexion, the segmented portion of the pixel or voxel value profile curve constituting a cluster of pixel or voxel values corresponding to the at least one pixel or voxel value ([B. Image segmentation based on K-Means clustering] We here introduce an adaptive quantization scheme of the gray levels that can accurately represent the gray-level distribution with much fewer bins. First the histogram with 64 uniform bins is computed in the eye region and is then filtered with a Gaussian. The significant peaks and valleys are extracted from the filtered histogram data. One peak together with its left- and right-hand side valleys automatically determine a bin subinterval. The first- and second-order derivatives of the filtered histogram are also computed to determine the inflection points, if any. 
Any subinterval containing inflection points should be further divided, selecting them as endpoints of smaller subintervals. Fig. 1 shows an example of adaptive quantization of gray levels, in which five bins are finally obtained (please note the inflection point denoted by o…In the clipped eye region, we can reasonably set K=4 because there are four approximately homogeneous regions corresponding to the iris, sclera, skin, and pupil and eyelashes (note that the two have similar gray levels). For each pixel, the m×m co-occurrence histogram is computed and used as a one-dimensional feature vector, and Bhattacharyya coefficient is adopted as distance measure to perform K-Means clustering. After segmentation a canny edge detector is applied to get an edge map of the eye region). [Figure from Jin reproduced in the original Office action.] Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Bhoyar with the teachings of Jin to segment between the reference point and point of inflection because "K-Means clustering based on the gray-level co-occurrence histogram [localizes] the limbic boundary" [Jin III. Segmentation Algorithm]. Regarding claim 5, Bhoyar and Jin teach the method of claim 4. Bhoyar further teaches wherein a length of the pixel or voxel value profile curve is the minimum perceptible pixel or voxel value difference ([2.2 Approximating the value of colour similarity threshold (JND)] For a perfectly uniform colour space the Euclidean distance between two colours is correlated with the perceptual colour difference. In such spaces (e.g. CIELAB to a considerable extent) the locus of colours which are not perceptually different from a given colour forms a sphere with a radius equal to JND. As RGB space is not a perceptually uniform space, the colours that are indiscernible from the target colour form a perceptually indistinguishable region with irregular shape. 
We have tried to derive an approximate value of JND by 2^4 × 2^6 × 2^8 quantization of each of the R, G, and B axes respectively. Thus, such perceptually indistinguishable irregular regions are modelled by 3-D ellipsoids for practical purposes). Regarding claim 7, Bhoyar and Jin teach the method of claim 4. Jin teaches automatically identifying another cluster in the 3D colour referential map of pixels or voxels by automatically analysing another portion of the pixel or voxel value profile curve ([B. Image segmentation based on K-Means clustering] We here introduce an adaptive quantization scheme of the gray levels that can accurately represent the gray-level distribution with much fewer bins. First the histogram with 64 uniform bins is computed in the eye region and is then filtered with a Gaussian. The significant peaks and valleys are extracted from the filtered histogram data. One peak together with its left- and right-hand side valleys automatically determine a bin subinterval. The first- and second-order derivatives of the filtered histogram are also computed to determine the inflection points, if any. Any subinterval containing inflection points should be further divided, selecting them as endpoints of smaller subintervals. Fig. 1 shows an example of adaptive quantization of gray levels, in which five bins are finally obtained (please note the inflection point denoted by o…In the clipped eye region, we can reasonably set K=4 because there are four approximately homogeneous regions corresponding to the iris, sclera, skin, and pupil and eyelashes (note that the two have similar gray levels). For each pixel, the m×m co-occurrence histogram is computed and used as a one-dimensional feature vector, and Bhattacharyya coefficient is adopted as distance measure to perform K-Means clustering. After segmentation a canny edge detector is applied to get an edge map of the eye region). 
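The adaptive quantization scheme quoted from Jin — smooth a 64-bin histogram with a Gaussian, take the valleys as subinterval boundaries, then further split any subinterval at inflection points found from the second derivative — can be sketched roughly as below. This is a simplified illustration under assumptions of this writeup (8-bit gray range, a fixed smoothing width), not Jin's actual implementation, and all names are hypothetical:

```python
import numpy as np

def adaptive_bin_edges(gray, n_bins=64, sigma=2.0):
    """Sketch of Jin-style adaptive gray-level quantization.

    1. Compute a 64-uniform-bin histogram and filter it with a Gaussian.
    2. Use valleys (local minima) of the smoothed histogram as cuts.
    3. Add further cuts at inflection points, i.e. sign changes of the
       second derivative of the smoothed histogram.
    """
    hist, edges = np.histogram(gray, bins=n_bins, range=(0, 256))
    # Gaussian smoothing by direct convolution (no SciPy dependency).
    x = np.arange(-3 * sigma, 3 * sigma + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    smooth = np.convolve(hist, kernel, mode="same")
    # Valleys: interior local minima of the smoothed histogram.
    interior = np.arange(1, n_bins - 1)
    valleys = interior[(smooth[1:-1] < smooth[:-2]) & (smooth[1:-1] <= smooth[2:])]
    # Inflection points: zero crossings of the second derivative.
    d2 = np.gradient(np.gradient(smooth))
    inflections = np.where(np.diff(np.sign(d2)) != 0)[0]
    cuts = np.unique(np.concatenate(([0], valleys, inflections, [n_bins - 1])))
    return edges[cuts]
```

On a bimodal image (two distinct gray populations), the returned edges include at least one interior cut between the two modes, which is the "much fewer bins" representation the quoted passage describes.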
Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Bhoyar with the teachings of Jin to automatically identify another cluster because "K-Means clustering based on the gray-level co-occurrence histogram [localizes] the limbic boundary" [Jin III. Segmentation Algorithm]. Regarding claim 25, Bhoyar teaches the system of claim 22. Jin, in the same field of endeavor of pixel clustering, teaches wherein the processor is further configured to: identify or select at least one pixel or voxel value within the image; create a pixel or voxel value profile curve from a reference point (xi, 0) corresponding to the at least one pixel or voxel value to all pixel or voxel values in a selected pixel or voxel value scheme ([B. Image segmentation based on K-Means clustering] A normalized gray-level co-occurrence histogram, similar to the color correlogram in Ref. [5], is proposed as a feature set for use in the K-Means clustering algorithm. For a pixel z∗=(x∗,y∗), let I(z∗) denote its gray intensity value. The gray levels of I are quantized into m bins); and analyze the pixel or voxel value profile curve by detecting a closest point (s1) on the pixel or voxel value profile curve to the reference point (xi, 0); locating a point of inflexion on the pixel or voxel value profile curve between the reference point (xi, 0) and the closest point (s1); and segmenting a portion of the pixel or voxel value profile curve between the reference point (xi, 0) and the point of inflexion, the segmented portion of the pixel or voxel value profile curve constituting a cluster in the 3D colour referential map of pixel or voxel values corresponding to the at least one pixel or voxel value ([B. Image segmentation based on K-Means clustering] We here introduce an adaptive quantization scheme of the gray levels that can accurately represent the gray-level distribution with much fewer bins. 
First the histogram with 64 uniform bins is computed in the eye region and is then filtered with a Gaussian. The significant peaks and valleys are extracted from the filtered histogram data. One peak together with its left- and right-hand side valleys automatically determine a bin subinterval. The first- and second-order derivatives of the filtered histogram are also computed to determine the inflection points, if any. Any subinterval containing inflection points should be further divided, selecting them as endpoints of smaller subintervals. Fig. 1 shows an example of adaptive quantization of gray levels, in which five bins are finally obtained (please note the inflection point denoted by o…In the clipped eye region, we can reasonably set K=4 because there are four approximately homogeneous regions corresponding to the iris, sclera, skin, and pupil and eyelashes (note that the two have similar gray levels). For each pixel, the m×m co-occurrence histogram is computed and used as a one-dimensional feature vector, and Bhattacharyya coefficient is adopted as distance measure to perform K-Means clustering. After segmentation a canny edge detector is applied to get an edge map of the eye region). [Figure from Jin reproduced in the original Office action.] Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Bhoyar with the teachings of Jin to segment between the reference point and point of inflection because "K-Means clustering based on the gray-level co-occurrence histogram [localizes] the limbic boundary" [Jin III. Segmentation Algorithm]. Regarding claim 27, Bhoyar and Jin teach the system of claim 25. 
Bhoyar further teaches wherein a length of the pixel or voxel value profile curve is the minimum perceptible pixel or voxel value difference ([2.2 Approximating the value of colour similarity threshold (JND)] For a perfectly uniform colour space the Euclidean distance between two colours is correlated with the perceptual colour difference. In such spaces (e.g. CIELAB to a considerable extent) the locus of colours which are not perceptually different from a given colour forms a sphere with a radius equal to JND. As RGB space is not a perceptually uniform space, the colours that are indiscernible from the target colour form a perceptually indistinguishable region with irregular shape. We have tried to derive an approximate value of JND by 2^4 × 2^6 × 2^8 quantization of each of the R, G, and B axes respectively. Thus, such perceptually indistinguishable irregular regions are modelled by 3-D ellipsoids for practical purposes). Regarding claim 29, Bhoyar and Jin teach the system of claim 25. Jin teaches identifying another cluster in the 3D colour referential map of pixels or voxels by automatically analysing another portion of the pixel or voxel value profile curve ([B. Image segmentation based on K-Means clustering] We here introduce an adaptive quantization scheme of the gray levels that can accurately represent the gray-level distribution with much fewer bins. First the histogram with 64 uniform bins is computed in the eye region and is then filtered with a Gaussian. The significant peaks and valleys are extracted from the filtered histogram data. One peak together with its left- and right-hand side valleys automatically determine a bin subinterval. The first- and second-order derivatives of the filtered histogram are also computed to determine the inflection points, if any. Any subinterval containing inflection points should be further divided, selecting them as endpoints of smaller subintervals. Fig. 
1 shows an example of adaptive quantization of gray levels, in which five bins are finally obtained (please note the inflection point denoted by o…In the clipped eye region, we can reasonably set K=4 because there are four approximately homogeneous regions corresponding to the iris, sclera, skin, and pupil and eyelashes (note that the two have similar gray levels). For each pixel, the m×m co-occurrence histogram is computed and used as a one-dimensional feature vector, and Bhattacharyya coefficient is adopted as distance measure to perform K-Means clustering. After segmentation a canny edge detector is applied to get an edge map of the eye region). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Bhoyar with the teachings of Jin to automatically identify another cluster because "K-Means clustering based on the gray-level co-occurrence histogram [localizes] the limbic boundary" [Jin III. Segmentation Algorithm]. Claim 28 is rejected under 35 U.S.C. 103 as being unpatentable over Bhoyar in view of Jin and Valavanis (Valavanis, Kimon P., J. Zheng, and George Paschos. "A total color difference measure for segmentation in color images." Journal of Intelligent and Robotic Systems 16.3 (1996): 269-313). Regarding claim 28, Bhoyar and Jin teach the system of claim 27. 
Valavanis, in the same field of endeavor of image color analysis, teaches wherein i) the analyser subdivides a length of the pixel or voxel value profile curve between the reference point (xi, 0) and a point corresponding to the minimum perceptible pixel or voxel value difference into a set of pixel or voxel values; and/or ii) the analyser subdivides a length of the pixel or voxel value profile curve between the reference point (xi, 0) and a point corresponding to the minimum perceptible pixel or voxel value difference into a set of pixel or voxel values, and the length of the pixel or voxel value profile curve between the reference point (xi, 0) and a point corresponding to the minimum perceptible pixel or voxel value difference is equal to the minimum perceptible pixel or voxel value difference for a pixel or voxel value ([3. Review of MacAdam's Ellipses] Each curve represents the locus of colors at a constant "distance" or just perceptible chromaticity difference. Colors inside the closed curve are indistinguishable. Colors just outside the closed curve are JND. Each of these curves has been fairly approximated by an ellipse. [4. The Total Color Difference Measurement] To show how the luminance component is accounted for in matching colors, observe Figure 2(a) and (b). In Figure 2(a), there are five matching chromaticity points P1, P2, P3, P4, P5 and three concentric ellipses, the smallest of which represents a one-unit MacAdam ellipse centered at Po as a chosen order. Points lying on the smallest ellipse differ in chromaticity by one unit from Po; this unit represents one-unit JND. Thus P1 resides half way from the center of MacAdam's ellipse, both P2 and P3 lie 1.5 units JND away from the center P0 and P4, P5 are 2.5 units JND away from the center Po. [5. The Color Image Segmentation Algorithm] The purpose of image segmentation is to produce a partition of a given image into nonoverlapping regions. 
Since images are represented by pixels, a group sharing some similar characteristics can form a region. In this section, we use the proposed total color difference (TCD) measurement to classify the pixels…The TCD measurement together with the statistically estimated MacAdam ellipse are used in the segmentation process as shown in Figure 3. The segmentation algorithm has two versions, namely, region growing and pixel analysis, and can be used to extract a single object or detect more than one similar object that is spatially separated, thus, extracting them from the background). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Bhoyar with the teachings of Valavanis to subdivide the pixel profile curve between the reference point and a point corresponding to the minimum perceptible pixel difference into a set of pixel values because "Color images provide more information for object recognition and may simplify problems related to image segmentation and scene interpretation. Color as an identifying feature that is local and largely independent of view and resolution may be efficiently used to segment reliably an image/scene" [Valavanis 1. Introduction]. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jacqueline R Zak whose telephone number is (571)272-4077. The examiner can normally be reached M-F 9-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell can be reached at (571) 270-3717. 
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JACQUELINE R ZAK/Examiner, Art Unit 2666 /EMILY C TERRELL/Supervisory Patent Examiner, Art Unit 2666

Prosecution Timeline

Jan 08, 2021
Application Filed
Dec 31, 2022
Non-Final Rejection — §102, §103, §DP
Jun 12, 2023
Response Filed
Jul 27, 2023
Final Rejection — §102, §103, §DP
Sep 28, 2023
Request for Continued Examination
Sep 29, 2023
Response after Non-Final Action
Oct 19, 2023
Non-Final Rejection — §102, §103, §DP
Feb 23, 2024
Response Filed
Mar 23, 2024
Final Rejection — §102, §103, §DP
Jul 01, 2024
Request for Continued Examination
Jul 03, 2024
Response after Non-Final Action
Sep 29, 2024
Non-Final Rejection — §102, §103, §DP
Feb 04, 2025
Response Filed
Mar 05, 2025
Non-Final Rejection — §102, §103, §DP
May 30, 2025
Response Filed
Aug 04, 2025
Final Rejection — §102, §103, §DP
Feb 05, 2026
Request for Continued Examination
Feb 20, 2026
Response after Non-Final Action
Mar 23, 2026
Non-Final Rejection — §102, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586340
PIXEL PERSPECTIVE ESTIMATION AND REFINEMENT IN AN IMAGE
2y 5m to grant Granted Mar 24, 2026
Patent 12462343
MEDICAL DIAGNOSTIC APPARATUS AND METHOD FOR EVALUATION OF PATHOLOGICAL CONDITIONS USING 3D OPTICAL COHERENCE TOMOGRAPHY DATA AND IMAGES
2y 5m to grant Granted Nov 04, 2025
Patent 12373946
ASSAY READING METHOD
2y 5m to grant Granted Jul 29, 2025


Prosecution Projections

8-9
Expected OA Rounds
67%
Grant Probability
55%
With Interview (-11.4%)
2y 10m
Median Time to Grant
High
PTA Risk
Based on 12 resolved cases by this examiner. Grant probability derived from career allow rate.
