Prosecution Insights
Last updated: April 19, 2026
Application No. 18/663,363

IMAGE READING APPARATUS FOR READING A DOCUMENT, AND IMAGE FORMING SYSTEM

Non-Final OA (§102, §103)
Filed: May 14, 2024
Examiner: ZHENG, JACKY X
Art Unit: 2681
Tech Center: 2600 — Communications
Assignee: Canon Kabushiki Kaisha
OA Round: 1 (Non-Final)
Grant Probability: 80% (Favorable)
OA Rounds: 1-2
To Grant: 2y 6m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 80% (667 granted / 837 resolved; +17.7% vs TC avg) — above average
Interview Lift: +17.2% among resolved cases with interview — a strong lift
Avg Prosecution: 2y 6m typical timeline (21 currently pending)
Total Applications: 858 across all art units (career history)
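The headline figures in this panel are simple arithmetic on the career counts. A minimal sketch of the presumed derivation (treating the interview lift as additive is an inference from the displayed numbers, not a documented formula):

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

base = allow_rate(667, 837)       # ~79.7%, displayed rounded as 80%
with_interview = base + 17.2      # interview lift applied additively

print(round(base, 1))             # 79.7
print(round(with_interview))      # 97
```

This reproduces both the 80% career figure and the 97% with-interview figure shown above.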

Statute-Specific Performance

§101: 8.1% (-31.9% vs TC avg)
§103: 49.9% (+9.9% vs TC avg)
§102: 28.7% (-11.3% vs TC avg)
§112: 11.3% (-28.7% vs TC avg)
Tech Center averages are estimates, based on career data from 837 resolved cases.
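The per-statute deltas are internally consistent: subtracting each delta from the examiner's rate recovers the same Tech Center baseline in every row. A quick consistency check (the 40% baseline is derived here from the displayed figures; it is not stated in the panel itself):

```python
# (examiner rate %, delta vs TC avg %) per statute, as displayed above
stats = {
    "§101": (8.1, -31.9),
    "§103": (49.9, +9.9),
    "§102": (28.7, -11.3),
    "§112": (11.3, -28.7),
}

# Tech Center average implied by each row: rate - delta
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied_tc_avg)  # every statute implies the same 40.0 baseline
```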

Office Action

DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This is an initial office action in response to communication(s) filed on May 14, 2024. Claims 1-17 are pending.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on May 14, 2024 was filed in compliance with the provisions of 37 CFR 1.97 and 1.98. Accordingly, the information disclosure statement is being considered by the examiner.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-5, 12 and 15-16 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Tsuchiya et al. (U.S. Pub. No. 2021/0218865 A1, hereinafter "Tsuchiya").

With regard to claim 1, the claim is drawn to an image reading apparatus (see Tsuchiya, i.e. in fig. 1, disclose the MFP (image processing apparatus) 200) that reads a document, comprising: a reading unit (see Tsuchiya, i.e. in fig. 1, in para.
26 and etc., disclose the scanner unit 300) comprising: a light source configured to emit light to the document (see Tsuchiya, i.e. fig. 3A, 3B, para. 39 and etc., disclose the light source 311); a sensor configured to receive the light reflected by the document and output an analog signal based on a light reception result of the reflected light (see Tsuchiya, i.e. fig. 3A, 3B, para. 39 and etc., disclose the reading sensor 304); and a converter configured to convert the analog signal output by the sensor into a digital signal, and output the digital signal obtained as a result of the conversion by the converter (see Tsuchiya, i.e. in fig. 1, in para. 31 and etc., disclose that “[0031] The scanner unit 300 has a conveyance unit 302 for conveying a document, the sensor unit 301 for reading a document, and an analog front end 303 (hereinafter AFE) for converting an analog signal detected by the sensor unit 301 into a digital signal. The conveyance unit 302 may be configured to move the sensor unit 301 relative to a positioned document…”), and a controller configured to: control the reading unit to perform a plurality of readings of the document (see Tsuchiya, i.e. in fig. 1, para. 30 and etc., disclose the scanner controller 206, and “[0030] A scanner controller 206 controls the scanner unit 300 under instructions from the CPU 201. Read data acquired by the scanner unit 300 is stored in the RAM 203…”); and change a reading condition of the reading unit so that a value of a first digital signal to be acquired in a first reading among the plurality of readings and a value of a second digital signal to be acquired in a second reading among the plurality of readings differ from one another (see Tsuchiya, i.e. in fig. 8, para. 
66-68 and etc., disclose that “[0066] If the processing is started, in S801, the scanner image processing unit 207 first acquires read data of the first (J=1) reading sensor 304 of the N (5 in the present embodiment) reading sensors 304 arrayed in the X direction. In S802, the scanner image processing unit 207 sets a variable J at J=2. [0067] In S803, the scanner image processing unit 207 acquires read data of the J-th reading sensor 304. In S804, the scanner image processing unit 207 performs the merging processing for read data of the (J−1)-th reading sensor 304 and the read data of the J-th reading sensor 304. More specifically, the scanner image processing unit 207 uses the first mask pattern 701 and the second mask pattern 702 to determine adoption or non-adoption of detection values in an area corresponding to an overlapping area. [0068] In S805, the scanner image processing unit 207 stores, in a long memory secured in the RAM 203, read data on the area for which the merging processing is performed and detection values are determined in S804…”). With regard to claim 2, the claim is drawn to the image reading apparatus according to claim 1, wherein the reading condition is a light emission intensity of the light source (see Tsuchiya, i.e. in para. 42 and etc., disclose that “… Light reflected from the surface of the document D passes through the document glass 309 again and is received by the light receiving elements 313 arrayed in the X direction. Each light receiving element 313 stores electric charge corresponding to the intensity of the received light. The stored electric charge is converted into a digital signal value by the AFE 303 in synchronization with a timing instructed by the scanner controller 206, that is, a timing corresponding to one pixel. The scanner controller 206 stores the converted digital signal values in the RAM 203 per color and pixel. 
The read data stored in the RAM 203 is then subjected to predetermined image processing by the scanner image processing unit 207 (see FIG. 1). Specific image processing performed by the scanner image processing unit 207 will be described later in detail….”). With regard to claim 3, the claim is drawn to the image reading apparatus according to claim 1, wherein the reading condition is an accumulation time of the reflected light by the sensor (see Tsuchiya, i.e. in para. 42 and etc., disclose that “… Light reflected from the surface of the document D passes through the document glass 309 again and is received by the light receiving elements 313 arrayed in the X direction. Each light receiving element 313 stores electric charge corresponding to the intensity of the received light. The stored electric charge is converted into a digital signal value by the AFE 303 in synchronization with a timing instructed by the scanner controller 206, that is, a timing corresponding to one pixel. The scanner controller 206 stores the converted digital signal values in the RAM 203 per color and pixel. The read data stored in the RAM 203 is then subjected to predetermined image processing by the scanner image processing unit 207 (see FIG. 1). Specific image processing performed by the scanner image processing unit 207 will be described later in detail….”). With regard to claim 4, the claim is drawn to the image reading apparatus according to claim 1, wherein the reading condition is a gain of the sensor (see Tsuchiya, i.e. in para. 73 and etc., disclose that “[0073] The granularity becomes apparent in the case of reading a high concentration image with a small quantity of reflected light. Further, since the amplitude and period of noise are different for each reading sensor, granularity in a read image also have different features for areas corresponding to the respective reading sensors…”). 
With regard to claim 5, the claim is drawn to the image reading apparatus according to claim 1, wherein the sensor includes a first light-receiving element that receives light from the document through a red filter, a second light-receiving element that receives light from the document through a green filter, and a third light-receiving element that receives light from the document through a blue filter, and the analog signal is output from each of the first, second, and third light-receiving elements (see Tsuchiya, i.e. in para.116 and etc., disclose that “[0116] The embodiments described above use the reading sensor 304 in which CISs are arrayed as the light receiving elements 313. However, CCDs can also be used as the light receiving elements 313. In the case of using CCDs, there is no shift in spectral characteristics caused by light sources as described with reference to FIG. 6, whereas a shift occurs in color characteristics of three color filters. That is, even in the case of using a reading sensor with an array of CCDs, the same problem as the case of using a reading sensor with an array of CISs occurs and the embodiments described above function effectively…”). With regard to claim 12, the claim is drawn to the image reading apparatus according to claim 1, wherein the controller determines a first reading condition to be used in the first reading of the document, and determines whether or not to perform the second reading based on the first digital signal obtained as a reading result of the document by the reading unit based on the first reading condition (see Tsuchiya, i.e. in fig. 8, step S807, in para. 66, 69 and etc., disclose that “[0069] After that, the scanner image processing unit 207 increments the variable J in S806 and determines whether J≤N in S807. In a case of J≤N, the scanner image processing unit 207 returns to S803 and repeats the processing for the J-th reading sensor 304. In a case of J>N in S807, the processing is finished…”). 
With regard to claim 15, the claim is drawn to the image reading apparatus according to claim 1, wherein the controller is configured to set a first reading condition to be used in the first reading of the document (see Tsuchiya, i.e. in fig. 8, step S802), and determine a second reading condition to be used in the second reading based on a first digital signal obtained as a reading result of the document by the reading unit based on the first reading condition (see Tsuchiya, i.e. in fig. 8, step S806, para. 69 and etc.). With regard to claim 16, the claim is drawn to an image forming system (see Tsuchiya, i.e. in fig. 1 and etc) comprising: an image reading apparatus configured to read a document (see Tsuchiya, i.e. in fig. 1, disclose the scanner unit 300); an image forming apparatus configured to output a test printed object by forming an image on a sheet based on first image data of a reference printed object (see Tsuchiya, i.e. in fig. 1, disclose the printer unit 400); and a processing apparatus configured to generate second image data to be used for image forming by the image forming apparatus, the second image data being generated based on a first read image signal obtained as a result of the image reading apparatus reading the test printed object and a second read image signal obtained as a result of the image reading apparatus reading the reference printed object (see Tsuchiya, i.e. fig. 15, para. 113 and etc., discloses that “[0113] FIG. 15 is a diagram showing an information processing system in which a scanner 600 having the scanner function, a printer 800 having the print function, and the information processing apparatus 100 are connected to one another by the network 107. Even in the form shown in FIG. 15, the scanner function, the print function, and the copy function can be implemented through cooperation of the information processing apparatus 100, the scanner 600, and the printer 800. 
In this form, the information processing apparatus 100 can concurrently execute scanner operation in the scanner 600 and print operation in the printer 800. In this form, the scanner 600 which performs the series of image processing described in the above embodiments for read data acquired by the scanner unit 300 corresponds to an image processing apparatus of the present invention…”), wherein the image reading apparatus comprises: a reading unit comprising a light source that is configured to emit light to the document, a sensor that is configured to receive the light reflected by the document and output an analog signal based on a light reception result of the reflected light, and a converter that is configured to convert the analog signal output by the sensor into a digital signal, the reading unit being configured to output the digital signal obtained as a result of the conversion by the converter; and a controller configured to: control the reading unit to perform a plurality of readings of the document; and change a reading condition of the reading unit so that a value of a first digital signal to be acquired in a first reading among the plurality of readings and a value of a second digital signal to be acquired in a second reading among the plurality of readings differ from one another (see discussions of claim 1 over Tsuchiya set forth above, also incorporated by reference herein).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 6, 9-11 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Tsuchiya as applied to claim 1 above, and further in view of Sakatani (U.S. Pub. No. 2015/0350493 A1).

With regard to claim 6, the claim is drawn to the image reading apparatus according to claim 1, wherein the reading condition includes a first reading condition that is determined so that a signal value of the analog signal output by the sensor when the document is read by the reading unit is not saturated, and a second reading condition that is determined so that the signal value of the analog signal output by the sensor when the document is read by the reading unit is greater than the signal value of the analog signal when the reading unit reads the document based on the first reading condition. The teachings of Tsuchiya do not explicitly disclose every aspect relating to “wherein the reading condition includes a first reading condition that is determined so that a signal value of the analog signal output by the sensor when the document is read by the reading unit is not saturated, and a second reading condition that is determined so that the signal value of the analog signal output by the sensor when the document is read by the reading unit is greater than the signal value of the analog signal when the reading unit reads the document based on the first reading condition.”
However, Sakatani discloses an analogous invention relates to image reading apparatus and method. More specifically, in Sakatani, i.e. in para. 58, 93 and etc., disclose that “[0093] For example, it is possible to match a condition when reading out an image by the colorimeter 30 to a condition of reading by the line sensor 40 by matching opaqueness, surface condition, whether to include a fluorescent material, saturation degree, luminosity degree and such like as the physical property of backing members 50 and 60…”. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Tsuchiya to include the limitation(s) discussed and also taught by Sakatani, with the aspects discussed above, as the cited prior arts are at least considered to be analogous arts if not also in the same field of endeavor relating to image processing arts. Further, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Tsuchiya by the teachings of Sakatani, and to incorporate the limitation(s) discussed and also taught by Sakatani, thereby to have “… assurance of color absolute values especially in color image forming apparatuses” (see Sakatani, i.e. para. 6 and etc.). With regard to claim 9, the claim is drawn to the image reading apparatus according to claim 1, wherein the controller determines, based on the first digital signal, a luminance value that is higher than or equal to a threshold value and indicated by a reading result of the document, and determines, based on the second digital signal, a luminance value that is lower than the threshold value and indicated by the reading result of the document by the reading unit (see Sakatani, i.e. in para. 
58, 93 and etc., disclose that “[0093] For example, it is possible to match a condition when reading out an image by the colorimeter 30 to a condition of reading by the line sensor 40 by matching opaqueness, surface condition, whether to include a fluorescent material, saturation degree, luminosity degree and such like as the physical property of backing members 50 and 60…”). With regard to claim 10, the claim is drawn to the image reading apparatus according to claim 9, wherein the threshold value is determined based on a reading condition used in the first reading and a reading condition used in the second reading (in Sakatani, i.e. in para. 58, 93 and etc., disclose that “[0093] For example, it is possible to match a condition when reading out an image by the colorimeter 30 to a condition of reading by the line sensor 40 by matching opaqueness, surface condition, whether to include a fluorescent material, saturation degree, luminosity degree and such like as the physical property of backing members 50 and 60…”). With regard to claim 11, the claim is drawn to The image reading apparatus according to claim 10, wherein the reading condition used in the second reading is a condition such that, if a color of a pixel in the document whose luminance value equals the threshold value is read, a signal value indicating the luminance value of the color of the pixel would be saturated in the first digital signal output by the sensor (see Sakatani, i.e. in para. 58, 93 and etc., disclose that “[0093] For example, it is possible to match a condition when reading out an image by the colorimeter 30 to a condition of reading by the line sensor 40 by matching opaqueness, surface condition, whether to include a fluorescent material, saturation degree, luminosity degree and such like as the physical property of backing members 50 and 60…”. With regard to claim 17, the claim is drawn to an image forming system (see Tsuchiya, i.e. in fig. 
1 and etc) comprising: an image reading apparatus configured to read a document (see Tsuchiya, i.e. in fig. 1, disclose the scanner unit 300); and an image forming apparatus configured to output a color adjustment chart by forming an image on a sheet based on image data of the color adjustment chart (see Tsuchiya, i.e. in fig. 1, disclose the printer unit 400; also see teachings of Sakatani), wherein the image forming apparatus controls an image forming condition based on a read image signal obtained as a result of the image reading apparatus reading the color adjustment chart formed on the sheet, and the image reading apparatus comprises: a reading unit comprising a light source that is configured to emit light to the document, a sensor that is configured to receive the light reflected by the document and output an analog signal based on a light reception result of the reflected light, and a converter that is configured to convert the analog signal output by the sensor into a digital signal, the reading unit being configured to output the digital signal obtained as a result of the conversion by the converter; and a controller configured to: control the reading unit to perform a plurality of readings of the document; and change a reading condition of the reading unit so that a value of a first digital signal to be acquired in a first reading among the plurality of readings and a value of a second digital signal to be acquired in a second reading among the plurality of readings differ from one another (see discussions of claim 1 over Tsuchiya set forth above, also incorporated by reference herein). The teachings of Tsuchiya do not explicitly disclose the aspect relating to “an image forming apparatus configured to output a color adjustment chart by forming an image on a sheet based on image data of the color adjustment chart”. However, in Sakatani, i.e. in para. 12, 16, 52, 74-79, fig. 
3-4 and etc., disclose “…[0012] In order to achieve one of the above objects, according to one aspect of the present invention, there is provided an image reading apparatus including: two image reading devices which are different from each other, read out a same surface of a same sheet after image formation on a sheet conveyance path and read out a plurality of common color patches formed in the same surface of the same sheet, one of the image reading devices being a first image reading device which reads out only a partial region in a main scanning direction and the other of the image reading devices being a second image reading device which reads out over an image formation width in the main scanning direction; and a calculation section which estimates a value equivalent to reading information of the first image reading device from reading information of the second image reading device on the basis of reading information obtained by reading out the common color patches, wherein each of a backing member used for reading by the first image reading device and a backing member used for reading by the second image reading device is formed of a member having a same physical property…”. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Tsuchiya to include the limitation(s) discussed and also taught by Sakatani, with the aspects discussed above, as the cited prior arts are at least considered to be analogous arts if not also in the same field of endeavor relating to image processing arts. Further, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Tsuchiya by the teachings of Sakatani, and to incorporate the limitation(s) discussed and also taught by Sakatani, thereby to have “… assurance of color absolute values especially in color image forming apparatuses” (see Sakatani, i.e. para. 
6 and etc.). Claim(s) 7-8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tsuchiya as applied to claim 1 above, and further in view of Funakoshi et al. (U.S. Pub. No. 2001/0040707 A1, hereafter as “Funakoshi”). With regard to claim 7, the claim is drawn to the image reading apparatus according to claim 1 further comprising a reference member to be read by the reading unit, wherein the controller determines a reading condition to be used in the first reading based on an analog signal output by the sensor when the reference member is read by the reading unit. The teachings of Tsuchiya do not explicitly disclose the aspect relating to “a reference member to be read by the reading unit, wherein the controller determines a reading condition to be used in the first reading based on an analog signal output by the sensor when the reference member is read by the reading unit.”. However, Funakoshi discloses an analogous invention relates to image scanning apparatus. More specifically, i.e. in Funakoshi, i.e. in fig. 10, in para. 83 and etc., disclose that “[0083] The luminance value of one-line read by the image sensor 301 is standardized by the luminance value of the white reference 204 at the corresponding pixel position and the number of process gradations (for example, 64 gradations) is calculated, thereby determining the luminance of the pixel. Normally, the luminance value and the density value have a relation shown in FIG. 14, where the abscissa indicates luminance and the ordinate indicates density. Regarding the same original, when the original is read from the ADF 401, the luminance becomes greater in comparison with the case where the original is read from the original stacking portion 501. Accordingly, if the luminance/density conversion is effected on the basis of the identical luminance/density conversion table, the density values converted will be changed, even regarding the same original…”. 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Tsuchiya to include the limitation(s) discussed and also taught by Funakoshi, with the aspect(s) discussed above, as the cited prior arts are at least considered to be analogous arts if not also in the same field of endeavor relating to image processing arts. Further, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Tsuchiya by the teachings of Funakoshi, and to incorporate the limitation(s) discussed and also taught by Funakoshi, thereby “…to provide an image reading apparatus in which the reading can be effected with high accuracy” (see Funakoshi, in para. 13 and etc.).

With regard to claim 8, the claim is drawn to the image reading apparatus according to claim 1 further comprising a reference member to be read by the reading unit, wherein the controller determines a reading condition to be used in the first reading based on a digital signal relating to a reading result of the reference member that is obtained by the conversion by the converter when the reference member is read by the reading unit (see in Funakoshi, i.e. in fig. 10, in para. 83 and etc., disclose that “[0083] The luminance value of one-line read by the image sensor 301 is standardized by the luminance value of the white reference 204 at the corresponding pixel position and the number of process gradations (for example, 64 gradations) is calculated, thereby determining the luminance of the pixel. Normally, the luminance value and the density value have a relation shown in FIG. 14, where the abscissa indicates luminance and the ordinate indicates density. Regarding the same original, when the original is read from the ADF 401, the luminance becomes greater in comparison with the case where the original is read from the original stacking portion 501.
Accordingly, if the luminance/density conversion is effected on the basis of the identical luminance/density conversion table, the density values converted will be changed, even regarding the same original…”).

Allowable Subject Matter

With regard to claims 13-14, the claims are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims and overcoming the corresponding rejections and/or objections (if any) set forth in the Office Action above. The following is a statement of reasons for the indication of allowable subject matter:

With regard to claim 13, the closest prior arts of record, Tsuchiya, Sakatani and Funakoshi, do not disclose or suggest, among the other limitations, the additional required limitation of “… the image reading apparatus according to claim 1, wherein the sensor includes a first light-receiving element that receives light from the document through a red filter, a second light-receiving element that receives light from the document through a green filter, and a third light-receiving element that receives light from the document through a blue filter, the analog signal is output from each of the first, second, and third light-receiving elements, and the controller does not perform the second reading if the first digital signal corresponding to the first light-receiving element, the first digital signal corresponding to the second light-receiving element, and the first digital signal corresponding to the third light-receiving element are all higher than or equal to a predetermined value”. These additional features, in combination with all the other features required in the claimed invention, are neither taught nor suggested by the prior art(s) of record.

With regard to claim 14, the claim depends directly or indirectly from claim 13 and encompasses the required limitations recited in claim 13 discussed above. Therefore, claims 13-14 are objected to.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Sato et al. (U.S. Pat/Pub No. 2023/0308575 A1) disclose an invention relating to an image reading device reading an image, an image forming apparatus including the image reading device, a non-transitory computer readable medium storing an image reading program, and an image reading method. The Art Unit (or Workgroup) location of your application in the USPTO has changed.
To aid in correlating any papers for this application, all further correspondence regarding this application should be directed to Art Unit 2681. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jacky X. Zheng whose telephone number is (571) 270-1122. The examiner can normally be reached on Monday - Friday, 9:00 am - 5:00 pm, alt. Friday Off. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Akwasi Sarpong can be reached on (571) 272-3438. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JACKY X ZHENG/Primary Examiner, Art Unit 2681

Prosecution Timeline

May 14, 2024: Application Filed
Mar 18, 2026: Non-Final Rejection under §102 and §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594150: CLIP FOR COUPLING TO SCAN BODY FOR ACCURATE INTRAORAL SCANNING (granted Apr 07, 2026; 2y 5m to grant)
Patent 12593073: POINT CLOUD ENCODING AND DECODING METHOD AND DEVICE BASED ON TWO-DIMENSIONAL REGULARIZATION PLANE PROJECTION (granted Mar 31, 2026; 2y 5m to grant)
Patent 12584858: Rapid fresh digital-pathology method (granted Mar 24, 2026; 2y 5m to grant)
Patent 12587605: SERVICE PROVIDING SYSTEM WITH SYNCHRONIZATION OF ATTRIBUTE DATA (granted Mar 24, 2026; 2y 5m to grant)
Patent 12581046: PATHOLOGY REVIEW STATION (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80%
With Interview: 97% (+17.2%)
Median Time to Grant: 2y 6m
PTA Risk: Low
Based on 837 resolved cases by this examiner. Grant probability derived from career allow rate.
