Prosecution Insights
Last updated: April 19, 2026
Application No. 18/641,184

METHOD FOR DETERMINING LESION REGION, AND MODEL TRAINING METHOD AND APPARATUS

Status: Non-Final OA (§103)
Filed: Apr 19, 2024
Examiner: DEPALMA, CAROLINE ELIZABETH
Art Unit: 2675
Tech Center: 2600 (Communications)
Assignee: Tencent Technology (Shenzhen) Company Limited
OA Round: 1 (Non-Final)
Grant Probability: 88% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 88% (above average; 37 granted / 42 resolved; +26.1% vs TC average)
Interview Lift: +15.6% for resolved cases with an interview (strong)
Typical Timeline: 2y 11m average prosecution; 16 applications currently pending
Career History: 58 total applications across all art units

Statute-Specific Performance

§101: 18.4% (-21.6% vs TC avg)
§103: 29.9% (-10.1% vs TC avg)
§102: 20.5% (-19.5% vs TC avg)
§112: 26.7% (-13.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 42 resolved cases.
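One way to sanity-check the table above: if each "vs TC avg" delta is simply the examiner's rate minus the Tech Center average, then the TC average can be back-solved from every row. Doing so with the four rows above yields the same 40.0% baseline for each statute, which is consistent with the page's note that a single estimated TC average is used. A minimal sketch (the rate = tc_avg + delta relationship is an assumption about how the dashboard computes the delta):

```python
# Allowance rate and "vs TC avg" delta per statute, from the table above.
stats = {
    "101": (18.4, -21.6),
    "103": (29.9, -10.1),
    "102": (20.5, -19.5),
    "112": (26.7, -13.3),
}

# Back-solve the implied Tech Center average: tc_avg = rate - delta.
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied_tc_avg)  # every statute back-solves to the same 40.0% baseline
```

That every row reconstructs to exactly 40.0% suggests the deltas were generated from one shared TC estimate rather than per-statute Tech Center data.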

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 04/26/2024, 07/14/2025, and 10/06/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner, with the exception of the following references, for which copies have not been received:

NPL Document 1 from the 07/14/2025 IDS: "Tencent Technology, WO+IPRP, PCT/CN2023/102592, 01MAR2025, 5 pgs."
Foreign Patent Document 3 from the 04/26/2024 IDS: CN 109523535 A, 03-26-2019, Beijing Friendship Hospital, Capital Medical University.

Specification

The abstract of the disclosure is objected to because it exceeds 150 words in length. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).

Claim Objections

Claim 15 is objected to because of the following informality: in claim 15, the phrase "a computer service" should read "a computer device". Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3, 5-6, 8-10, 12-13, 15-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou (CN 109919928 A) in view of Feng (US 20170213071 A1).

Regarding claim 1, Zhou discloses a method for determining a lesion region in a pathological image performed by a computer device ([0059] a computer device for performing a method; [0057] method including detecting regions within a medical image and indicating information of the region; [0206] wherein the region is a lesion of a specific pathology), the method comprising: sampling a pathological image by a first sampling way to obtain at least two first instance images ([0094] obtain candidate regions (i.e. first instance images); [0095]-[0096] using a candidate region network including convolutional layers); determining a candidate lesion region in the pathological image, based on feature information extracted from the at least two first instance images ([0097] candidate regions which meet a condition based on their extracted feature information (e.g. which have a region type probability greater than a threshold) are selected (i.e. as candidate lesion region)); sampling the candidate lesion region by a second sampling way to obtain at least two second instance images ([0098]-[0099] using a classification and regression network including ROI pooling and fully connected layers, wherein several regions of interest can be selected from the candidate regions (i.e. as second instance images)); and determining lesion indication information of the pathological image, based on feature information extracted from the at least two second instance images, wherein the lesion indication information indicates the lesion region in the pathological image ([0099] yields the detection region and its location (i.e. location of the lesion region); [0098] the region being a lesion region).

Zhou fails to disclose an overlap degree between the second instance images being greater than that between the first instance images.

Feng, in a related system from the same field of endeavor of detecting a region of interest from a target image (Abstract), discloses an overlap degree between the second instance images being greater than that between the first instance images (Fig. 1, Fig. 3; [0064] the detection apparatus may perform the coarse scanning on the target image while moving a first sliding window 302 at an interval of a first step length 301; the first step length may be greater than a second step length 304 adopted in a fine scanning (i.e. because the sliding window moves a smaller interval between scans, the scans have a higher overlap with each other at the second step length); [0065]-[0066] candidate areas are identified in first scan images scanned by the first sliding window, and the second sliding window scans second scan images (i.e. candidate areas) to detect a face).
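Feng's coarse-to-fine step lengths map directly onto the claimed overlap relationship: for a fixed window size, a smaller step length between successive window positions yields a higher overlap between adjacent sampled patches. A minimal sketch of that geometry (the window size and step values are illustrative assumptions, not figures from either reference):

```python
def overlap_fraction(window: int, step: int) -> float:
    """Fraction of a sliding window shared with the next window position."""
    return max(0, window - step) / window

WINDOW = 64  # illustrative patch size in pixels

# First sampling way (coarse scan): large step, little or no overlap.
coarse = overlap_fraction(WINDOW, step=48)   # 0.25
# Second sampling way (fine scan): small step, high overlap.
fine = overlap_fraction(WINDOW, step=16)     # 0.75

assert fine > coarse  # second instance images overlap more than the first
```

This is the crux of the examiner's mapping: Feng's smaller second step length necessarily produces the greater overlap degree recited for the second instance images.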
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to combine Feng with Zhou and utilize an overlap degree between the second instance images being greater than that between the first instance images, as disclosed by Feng, as part of a method for determining a lesion region in a pathological image performed by a computer device, as disclosed by Zhou, for the purpose of improving the detection accuracy, efficiency, and performance of the system in region detection in images (see Feng: [0062]).

Regarding claim 2, Zhou in view of Feng discloses the method according to claim 1 as applied above. Zhou further discloses wherein the determining a candidate lesion region in the pathological image, based on feature information extracted from the at least two first instance images comprises: performing feature encoding on each first instance image to obtain first feature information corresponding to each first instance image; performing feature fusion on the first feature information corresponding to each first instance image to obtain global feature information of the pathological image ([0096] process the selected feature information using a convolution layer and combine outputs of layers to predict the region of 'selected feature information' in the medical image, so as to obtain the candidate region and the region type probability of each candidate region); determining a first predicted probability corresponding to each first instance image according to the global feature information and the first feature information corresponding to each first instance image, wherein the first predicted probability refers to a probability that the first instance image comprises the lesion region ([0093], [0096] determining region probabilities based on feature information and global feature map; [0204] wherein the probability indicates the probability of the region being a lesion region); and determining the candidate lesion region, based on a position of the first instance image corresponding to the first predicted probability that meets a first condition in the pathological image ([0096]-[0097] select candidate regions with a region type probability greater than the threshold as detection regions and obtain the location of the detection region; Fig. 11, [0203]-[0204] probability that a candidate region is the lesion region in the pathological image).

Regarding claim 3, Zhou in view of Feng discloses the method according to claim 1 as applied above. Zhou further discloses wherein the determining lesion indication information of the pathological image, based on feature information extracted from the at least two second instance images comprises: performing feature encoding on each second instance image to obtain second feature information corresponding to each second instance image; performing feature fusion on the second feature information corresponding to each second instance image to obtain local feature information of the pathological image for the candidate lesion region ([0099] the ROIs and the feature maps output by the pyramid network (such as feature maps of medical images) are sent to the ROI pooling layer for pooling processing); and determining lesion probability distribution information of the pathological image, based on the local feature information and global feature information of the pathological image, wherein the lesion probability distribution information indicates probability distribution of the lesion region in the pathological image; wherein the lesion indication information comprises the lesion probability distribution information ([0097]-[0099] select regions with a probability above a threshold to determine the location of the target region; Fig. 11, [0203]-[0204] probability distribution that candidate regions include a lesion region in the pathological image).

Regarding claim 5, Zhou in view of Feng discloses the method according to claim 1 as applied above. Zhou further discloses wherein the sampling a pathological image by a first sampling way to obtain at least two first instance images comprises: partitioning a background of the pathological image, and determining a background image and a foreground image in the pathological image ([0103]-[0104] divide the image into foreground and background regions using a preset detection model); segmenting the pathological image by the first sampling way to obtain at least two first candidate instance images ([0094] obtain candidate regions (i.e. first instance images); [0095]-[0096] using a candidate region network including convolutional layers); and determining a first candidate instance image comprising the foreground image from the at least two first candidate instance images as the first instance image ([0108] foreground regions are selected based on features as selected candidate regions).

Regarding claim 6, Zhou in view of Feng discloses the method according to claim 1 as applied above. Zhou further discloses wherein the sampling the candidate lesion region by a second sampling way to obtain at least two second instance images comprises: extracting candidate lesion images from the pathological image according to the candidate lesion region ([0094], [0096] obtaining the candidate region based on candidate probabilities and regions); zooming the candidate lesion image, based on the size of the pathological image to obtain a target lesion image, wherein the size of the target lesion image is consistent with that of the pathological image ([0160], [0147], [0092] images can be scaled or cropped (i.e. zoomed) to create consistent sizes before further processing); and sampling the target lesion image by the second sampling way to obtain the at least two second instance images ([0098]-[0099] using a classification and regression network including ROI pooling and fully connected layers, wherein several regions of interest can be selected from the candidate regions (i.e. as second instance images)).

Regarding claim 8, Zhou in view of Feng discloses everything claimed as applied above (see rejection of claim 1), and Zhou further discloses a computer device, comprising a processor and a memory, the memory storing a computer program therein, and the computer program being loaded and executed by the processor and causing the computer device to implement a method for determining a lesion region in a pathological image ([0257] a device including a processor, memory, and stored programs; [0059] a computer device for performing a method; [0057] method including detecting regions within a medical image and indicating information of the region; [0206] wherein the region is a lesion of a specific pathology).

Regarding claims 9-10 and 12-13, Zhou in view of Feng discloses everything claimed as applied above (see rejections of claims 2-3 and 5-6, respectively).

Regarding claim 15, Zhou in view of Feng discloses everything claimed as applied above (see rejection of claim 1), and Zhou further discloses a non-transitory computer-readable storage medium storing a computer program therein, the computer program being loaded and executed by a processor of a computer service and causing the computer device to implement a method for determining a lesion region in a pathological image ([0257] a non-volatile memory storing a program and executed by a processor of a computer device; [0057] method including detecting regions within a medical image and indicating information of the region; [0206] wherein the region is a lesion of a specific pathology).
Regarding claims 16-17 and 19-20, Zhou in view of Feng discloses everything claimed as applied above (see rejections of claims 2-3 and 5-6, respectively).

Allowable Subject Matter

Claims 4, 7, 11, 14, and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter:

Regarding claim 4, Zhou in view of Feng discloses the method according to claim 1 as applied above. Zhou further discloses wherein the determining lesion indication information of the pathological image, based on feature information extracted from the at least two second instance images comprises: performing feature encoding on each second instance image to obtain second feature information corresponding to each second instance image; performing feature fusion on the second feature information corresponding to each second instance image to obtain local feature information of the pathological image for the candidate lesion region ([0099] the ROIs and the feature maps output by the pyramid network (such as feature maps of medical images) are sent to the ROI pooling layer for pooling processing).

However, Zhou fails to disclose determining a second predicted probability corresponding to each second instance image according to the local feature information and the second feature information corresponding to each second instance image, wherein the second predicted probability refers to a probability that the second instance image comprises the lesion region; and determining lesion indication information of the pathological image, based on a position of the second instance image corresponding to the second predicted probability that meets a second condition in the pathological image. Similar reasoning applies to claims 11 and 18.
Regarding claim 7, Zhou in view of Feng discloses the method according to claim 1 as applied above. Zhou further discloses wherein the lesion indication information is obtained by a lesion region determination model, the lesion region determination model comprising an encoding network, a first classification network, a second classification network, and a third classification network (Fig. 3, [0086] post-training detection model, which includes a pyramid network, region networks, and classification regression networks; [0198] the post-training detection model may also include a candidate region network);

wherein the encoding network is configured to perform feature encoding on the first instance image and the second instance image to obtain the first feature information corresponding to the first instance image and the second feature information corresponding to the second instance image ([0088] pyramid network processes feature information output from convolutional layers, generating a feature map);

the first classification network is configured to determine the first predicted probability corresponding to each first instance image and the global feature information of the pathological image according to the first feature information corresponding to each first instance image ([0097] the candidate region network can be used to determine the regions of selected feature information in the medical image, thereby obtaining candidate regions and the region type probability of each candidate region); and

the third classification network is configured to determine the lesion probability distribution information of the pathological image according to the global feature information and the local feature information ([0098] a classification regression network can be used to select candidate regions with a region type probability greater than a threshold as detection regions (i.e. suspected lesion regions), and the location of the detection region can be obtained).
However, Zhou fails to disclose wherein the second classification network is configured to determine the second predicted probability corresponding to each second instance image and the local feature information of the pathological image for the candidate lesion region according to the second feature information corresponding to each second instance image. Similar reasoning applies to claim 14.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CAROLINE DEPALMA, whose telephone number is (571) 270-0769. The examiner can normally be reached Mon-Thurs, 9:00 am-4:00 pm Eastern Time.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Moyer, can be reached at 571-272-9523. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CAROLINE E. DEPALMA/
Examiner, Art Unit 2675

/SJ Park/
Primary Examiner, Art Unit 2675

Prosecution Timeline

Apr 19, 2024: Application Filed
Mar 09, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602777: APPARATUS AND METHOD FOR QUANTITATIVE ASSESSMENT OF MEDICAL IMAGES FOR DIAGNOSIS OF CHRONIC OBSTRUCTIVE PULMONARY DISEASE. Granted Apr 14, 2026 (2y 5m to grant).
Patent 12586409: DETECTING EMOTIONAL STATE OF A USER BASED ON FACIAL APPEARANCE AND VISUAL PERCEPTION INFORMATION. Granted Mar 24, 2026 (2y 5m to grant).
Patent 12586246: SYSTEM AND METHOD FOR VICARIOUS CALIBRATION OF OPTICAL DATA FROM SATELLITE SENSORS. Granted Mar 24, 2026 (2y 5m to grant).
Patent 12573046: METHODS AND SYSTEMS FOR ANALYZING BRAIN LESIONS FOR THE DIAGNOSIS OF MULTIPLE SCLEROSIS. Granted Mar 10, 2026 (2y 5m to grant).
Patent 12567226: METHOD AND DEVICE OF ACQUIRING FEATURE INFORMATION OF DETECTED OBJECT, APPARATUS AND MEDIUM. Granted Mar 03, 2026 (2y 5m to grant).
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 88%
With Interview: 99% (+15.6%)
Median Time to Grant: 2y 11m
PTA Risk: Low

Based on 42 resolved cases by this examiner. Grant probability is derived from the career allow rate.
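The headline figures above are consistent with a simple derivation: the base grant probability is the examiner's career allow rate (37 granted of 42 resolved, which rounds to 88%), and the with-interview figure adds the +15.6-point interview lift. A minimal sketch of that arithmetic; the 99% cap is an assumption about how the dashboard avoids reporting probabilities at or above 100%:

```python
granted, resolved = 37, 42   # career figures from the examiner data above
interview_lift = 15.6        # percentage points

base = round(100 * granted / resolved)              # career allow rate -> 88
with_interview = min(99, round(base + interview_lift))  # capped -> 99

print(f"base={base}%, with interview={with_interview}%")
```

Note that 88 + 15.6 exceeds 100, so the displayed 99% only follows if some cap or diminishing-returns adjustment is applied; the raw data shown on the page does not say which.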
