Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Response to Amendment
The Amendment filed 12/16/2025 has been entered. Claims 1-19 remain pending. Claim 20 is cancelled. Claim 21 is new.
Response to Arguments
Applicant’s arguments with respect to claims 1-19 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-8, 10-17, 19, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Guo et al. (US 2015/0078640 A1) in view of Abdel-Aziz et al. (Generating Bézier curves for medical image reconstruction, published 2021).
Regarding Claim 1, Guo teaches “A region recognition method, comprising:
obtaining a scene image of a scene to-be-recognized” (Guo, [0040] discloses “The device 140 is configured to produce scanned images that each represent a cross section of the living body at one of multiple cross sectional (transverse) slices arranged along the axial direction of the body, which is oriented in the long dimension of the body.” Guo, [0049] discloses “In step 201, a region of interest (ROI) is obtained”; where a living body cross section is a scene to-be-recognized; where an ROI is a scene image);
“determining a specified boundary curve of the scene image” (Guo, [0055] discloses “In step 225, one or more initial boundary surfaces are determined in three or more dimensions”; where a boundary surface is a specified boundary curve),
“generating a contour map corresponding to the scene image according to the specified boundary curve” (Guo, [0111] discloses “in step 421 the values for the regional minimum at all voxels in the ROI are saved, e.g., stored on a computer-readable medium. If there is another regional minimum not yet processed, then the process above should be repeated for the next regional minimum, until topographical distances to all regional minima have been mapped and saved”; where determining topographical distances to regional minima is generating a contour map. Guo, [0145] discloses “FIG. 11A is a graph that illustrates an example contour map with an inner contour suitable for a brain tumor segmentation, according to an embodiment”); “and
determining a target region in the scene according to the contour map” (Guo, [0149] discloses “FIG. 12A is an image that illustrates an example initial boundary 1210 for a brain tumor in one slice 1201 of a MR scan, according to an embodiment. FIG. 12B is an image that illustrates an example refined double boundary 1220 and 1230 for a brain tumor in one slice 1210 of a MR scan, according to an embodiment”; where a refined double boundary is a target region).
[Image: Fig. 5 of Guo (media_image1.png)]
[Image: Fig. 11A of Guo (media_image2.png)]
[Image: Figs. 12A and 12B of Guo (media_image3.png)]
Guo does not explicitly teach “the specified boundary curve comprising a Bezier curve.”
However, in an analogous field of endeavor, Abdel-Aziz teaches “the specified boundary curve comprising a Bezier curve” (Abdel-Aziz, Section 1, paragraph 4 discloses “The proposed curves are considered as the first work that can use to develop Bézier curves for medical image representation.” See also Fig. 5).
[Image: Fig. 5 of Abdel-Aziz (media_image4.png)]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Guo to incorporate the teachings of Abdel-Aziz by applying a Bézier curve to identify boundaries. One of ordinary skill in the art would have been motivated to combine the Guo and Abdel-Aziz references by the explicit teaching of Abdel-Aziz; Abdel-Aziz, Section 1, paragraph 4 discloses “The advantage of these curves is that they are flexible in control points, torsion and curvature.” Thus, there is a teaching in the Abdel-Aziz reference to combine the reference teachings. Guo teaches determining boundary surfaces, and Abdel-Aziz teaches determining boundaries in the same field of endeavor using a particular method, Bézier curves. There was a reasonable expectation of success, as shown by Fig. 5 of Abdel-Aziz, where the reconstructed curve is shown in Fig. 5(b). Accordingly, the combination of Guo and Abdel-Aziz discloses the invention of Claim 1.
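For illustration of the Bézier-curve boundary technique at issue, the examiner notes that such a curve is evaluated from its control points and can be discretized into boundary points. The sketch below uses the standard de Casteljau algorithm; it is the examiner's illustration only, not code from Guo or Abdel-Aziz, and all names are hypothetical.

```python
# Illustrative sketch only: evaluating a Bezier curve from its control
# points via the de Casteljau algorithm, and discretizing it into
# boundary points of the kind that could seed a contour map.

def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] by repeated
    linear interpolation between successive control points."""
    points = [tuple(p) for p in control_points]
    while len(points) > 1:
        points = [
            tuple((1 - t) * a + t * b for a, b in zip(p0, p1))
            for p0, p1 in zip(points, points[1:])
        ]
    return points[0]

def sample_boundary(control_points, n_samples=100):
    """Discretize the curve into n_samples boundary points."""
    return [de_casteljau(control_points, i / (n_samples - 1))
            for i in range(n_samples)]
```

For example, a cubic curve with control points (0,0), (1,2), (3,2), (4,0) starts and ends at the first and last control points, and `sample_boundary` yields the discrete boundary points referenced in the dependent-claim mappings below.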
Regarding Claim 2, the combination of Guo and Abdel-Aziz teaches “The method according to claim 1, wherein generating the contour map corresponding to the scene image according to the specified boundary curve includes:
obtaining pixel coordinate information of discrete points on the specified boundary curve” (Guo, [0145] discloses “FIG. 11A is a graph that illustrates an example contour map with an inner contour suitable for a brain tumor segmentation, according to an embodiment. The horizontal axis 1102 indicates voxel location in one dimension, and the vertical axis 1104 indicates voxel location in a perpendicular dimension”);
“obtaining a plurality of contour lines according to target contour weight and the pixel coordinate information of each discrete point by calculation, wherein the target contour weight represents coordinate difference information between the plurality of contour lines” (Guo, [0090] discloses “The topographic force can further incorporate a weight function h, which can be defined as a function of distance d between the two contours (i.e., the shortest distance between a point on one contour to the other contour)”; where h is a target contour weight; where distance d between two contours is coordinate difference information); “and
rendering regions corresponding to the plurality of contour lines to obtain the contour map” (Guo, Fig. 12B shows the contour lines comprising the contour map output).
Regarding Claim 3, the combination of Guo and Abdel-Aziz teaches “The method according to claim 2, further including determining the target contour weight according to scene features of the scene to-be-recognized, wherein:
obtaining the pixel coordinate information of the discrete points on the specified boundary curve includes:
performing discretization on the specified boundary curve according to the pixel coordinate information of specified contour points on the specified boundary curve to determine the discrete points” (Guo, [0087] discloses “In discrete case, the topographical distance between two voxels p and q is a weighted distance defined by Equation 9a
[Equation 9a of Guo (media_image5.png)]
where the minimization is taken over all possible paths (p1=p, p2, . . . , pn =q) between two voxels p and q”; where voxels p and q of Equation 9a are discrete points; where solving equation 9a is performing discretization on the specified boundary curve); “and
obtaining the pixel coordinate information of the discrete points” (Guo, [0087] discloses “where the minimization is taken over all possible paths (p1=p, p2, . . . , pn =q) between two voxels p and q”; where p and q are pixel coordinate information of the discrete points. See also Guo, [0145] and Fig. 11A: “The horizontal axis 1102 indicates voxel location in one dimension, and the vertical axis 1104 indicates voxel location in a perpendicular dimension”); “and
obtaining the plurality of contour lines according to the target contour weight and the pixel coordinate information of each discrete point by calculation includes:
obtaining curve parameters of the specified boundary curve according to the pixel coordinate information of the discrete points” (Guo, [0087] discloses “where dist( ) denotes the Chamfer distance, and LS(x) is the lower slope at point x”; where Chamfer distance and lower slope of Equation 9b are curve parameters); “and
obtaining the plurality of contour lines by calculation according to the target contour line weight and the curve parameters” (Guo, [0087] discloses “In discrete case, the topographical distance between two voxels p and q is a weighted distance defined by Equation 9a
[Equation 9a of Guo (media_image5.png)]
where the minimization is taken over all possible paths (p1=p, p2, . . . , pn =q) between two voxels p and q (See Meyer, F. 1994). The cost of walking from voxel p to voxel q is related to the lower slope, which is defined as the maximum slope linking p to any of its neighbors of lower altitude, given by Equation 9b
[Equation 9b of Guo (media_image6.png)]
, where dist( ) denotes the Chamfer distance, and LS(x) is the lower slope at point x”; where weighted distance is target contour line weight; where Chamfer distance and lower slope of Equation 9b are curve parameters).
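For illustration of the discrete topographical distance described in Guo's Equations 9a and 9b, the examiner notes that such a distance can be computed as a weighted shortest path over the voxel grid, with step costs driven by a "lower slope" at each voxel. The sketch below is the examiner's simplified illustration only (4-connectivity, unit Chamfer steps), not Guo's implementation.

```python
# Illustrative sketch only (not Guo's code): topographical distance as a
# weighted shortest path over a 2D voxel grid. The cost of stepping away
# from voxel p is its "lower slope" LS(p), i.e. the maximum drop to any
# lower-altitude 4-neighbor, loosely following Equations 9a/9b of Guo.
import heapq

def lower_slope(image, p):
    """Max drop from p to any lower-altitude 4-neighbor (0 if none)."""
    y, x = p
    h, w = len(image), len(image[0])
    slopes = [image[y][x] - image[ny][nx]
              for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
              if 0 <= ny < h and 0 <= nx < w and image[ny][nx] < image[y][x]]
    return max(slopes, default=0.0)

def topographical_distance(image, source):
    """Dijkstra over the grid; edge cost = lower slope at the current voxel,
    so the minimization runs over all paths from the source, as in Eq. 9a."""
    h, w = len(image), len(image[0])
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if d > dist.get((y, x), float("inf")):
            continue  # stale entry
        cost = lower_slope(image, (y, x))
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + cost
                if nd < dist.get((ny, nx), float("inf")):
                    dist[(ny, nx)] = nd
                    heapq.heappush(heap, (nd, (ny, nx)))
    return dist
```

On a flat image the lower slope is zero everywhere, so all distances are zero; on a sloped image the distance accumulates the downhill drops along the cheapest path, consistent with mapping topographical distances to regional minima.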
Regarding Claim 4, the combination of Guo and Abdel-Aziz teaches “The method according to claim 2, wherein rendering the regions corresponding to the plurality of contour lines to obtain the contour map includes:
determining a rendering mode corresponding to the target contour weight value” (Guo, [0091] discloses “FIG. 6 is a graph 600 that illustrates a separation weighting function h for the topographical effects, according to an embodiment. The horizontal axis 602 is distance d, the vertical axis 604 is multiplicative factor. The trace 610 indicates the weighting function h, which falls off substantially in the vicinity of dmax 612 and is zero by a separation threshold distance 608” and “FIG. 7B is a block diagram that illustrates the boundary voxels 721 on the first boundary C1 501 affected by the topographical effects of a second boundary C2 502, according to an embodiment. Voxels on C1 father than the separation threshold distance 608 are not affected by the boundary C2 502, and voxels at a distance d-max 612 are just barely affected”; where Fig. 7B shows rendering corresponding to the target contour weight value h); “and
according to a height value of each contour line of the plurality of contour lines and the rendering mode, rendering a region corresponding to each contour line to obtain the contour map” (Guo, [0090] discloses “The topographic force can further incorporate a weight function h, which can be defined as a function of distance d between the two contours (i.e., the shortest distance between a point on one contour to the other contour)”; where distance between two contours is a height value of each contour line).
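For illustration of the separation weighting function h discussed above, the examiner notes that Guo describes only its qualitative shape: near one for small inter-contour distance d, falling off, and zero by a separation threshold. The smoothstep form in the sketch below is an assumption for illustration; the exact functional form of Guo's Fig. 6 is not reproduced here.

```python
# Illustrative sketch only: a separation weighting function h(d) of the
# general shape Guo describes -- a multiplicative factor near 1.0 for
# small inter-contour distance d, falling off smoothly and reaching 0.0
# by the separation threshold. The smoothstep falloff is an assumption.

def separation_weight(d, d_threshold):
    """Multiplicative weight as a function of inter-contour distance d."""
    if d >= d_threshold:
        return 0.0  # beyond the separation threshold, no effect
    t = d / d_threshold
    return 1.0 - t * t * (3.0 - 2.0 * t)  # smoothstep falloff from 1 to 0
```

This yields h(0) = 1, h monotonically decreasing, and h = 0 at and beyond the threshold, matching the behavior that voxels farther than the separation threshold distance are not affected by the other boundary.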
Regarding Claim 5, the combination of Guo and Abdel-Aziz teaches “The method according to claim 4, further including:
in response to receiving adjustment information for the target contour weight of the contour map, obtaining adjusted contour weight” (Guo, [0062] discloses “In step 261, it is determined whether there are to be any manual edits. For example, the current boundary is presented to a user along with a graphical user interface, which the user can operate to indicate a manual edit to be performed”; where a positive determination that a manual edit is to be performed is receiving adjustment information); “and
according to a rendering mode corresponding to the adjusted contour weight, re-rendering the contour map to obtain an adjusted contour map” (Guo, [0062] discloses “If so, control passes to step 263 to receive commands from a user that indicate one or more manual edits to be performed. After manual editing, the edited boundary may be used directly or propagated to another subset or sent back for further refinement”; where returning for further refinement is re-rendering the contour map to obtain an adjusted contour map).
Regarding Claim 6, the combination of Guo and Abdel-Aziz teaches “The method according to claim 1, wherein determining the specified boundary curve of the scene image includes:
according to specified contour points, generating the specified boundary curve matching the scene image” (Guo, [0146] discloses “Find the intensity profile along the line i ϵ [1, . . . , N]. If there is a ring pattern, then determine the width Wi of the ring, and obtain the inner boundary point Pi. Connect all the inner boundary points to obtain a polygon. Use the polygon as an initial inner boundary. Once the initial inner boundary is found, the present invention provides two different strategies for refining and propagating the boundary”).
Regarding Claim 7, the combination of Guo and Abdel-Aziz teaches “The method according to claim 1, wherein determining the specified boundary curve of the scene image includes:
performing feature recognition on the scene image to obtain image features” (Guo, [0051] discloses “In step 203, tissue types in the ROI are classified, at least preliminarily. For example, it is specified that the target tissue is a particular organ or tumor therein, such as liver or brain or lung or lymph node”; where determining tissue type is feature recognition);
determining a region to-be-recognized according to the image features” (Guo, [0053] discloses “If it is determined in step 211 that the initial boundary is to be determined in three dimensions, then in step 220 a volume of interest (VOI) is determined. In the illustrated embodiment, the VOI is determined automatically based on the size of the ROI and the distance of the subset from the subset on which the ROI is defined, such as one slice”; where determining a VOI based on tissue pre-classified ROI is determining a region to-be-recognized according to image features); “and
determining a specified boundary curve of the region to-be-recognized” (Guo, [0145] discloses “FIG. 11A is a graph that illustrates an example contour map with an inner contour suitable for a brain tumor segmentation, according to an embodiment”; where inner contour is a specified boundary curve).
Regarding Claim 8, the combination of Guo and Abdel-Aziz teaches “The method according to claim 1, wherein determining the target region in the scene according to the contour map includes:
obtaining specified position information of a recognition frame corresponding to the object to be recognized” (Guo, [0096] discloses “In step 315, the inside marker, e.g. the regional minimum, is determined for each boundary. This can be achieved by eroding the initial contour, which can be formed manually in the first start frame or propagated from the previous frame”); “and
determining the target region of the object to be recognized in the scene to-be-recognized according to pixel coordinate information of the specified position information on the contour map” (Guo, [0149] discloses “FIG. 12A is an image that illustrates an example initial boundary 1210 for a brain tumor in one slice 1201 of a MR scan, according to an embodiment. FIG. 12B is an image that illustrates an example refined double boundary 1220 and 1230 for a brain tumor in one slice 1210 of a MR scan, according to an embodiment”; where a refined double boundary is a target region).
Regarding Claims 10-17, these claims recite a device with elements corresponding to the steps recited in Claims 1-8, respectively. Therefore, the recited elements of these claims are mapped to the combination of Guo and Abdel-Aziz in the same manner as the corresponding steps of the corresponding method claims. Finally, Guo discloses “An electronic device, comprising: one or more processors; and a memory coupled to the one or more processors and storing computer program instructions” (Guo, [0160] discloses “According to one embodiment of the invention, those techniques are performed by computer system 1300 in response to processor 1302 executing one or more sequences of one or more instructions contained in memory 1304. Such instructions, also called software and program code, may be read into memory 1304 from another computer-readable medium such as storage device 1308.”).
Regarding Claim 19, this claim recites a computer-readable storage medium storing a program with instructions corresponding to the steps recited in Claim 1. Therefore, the recited program instructions are mapped to the combination of Guo and Abdel-Aziz in the same manner as the corresponding steps of Claim 1. Finally, Guo discloses “A non-transitory computer readable storage medium containing computer program instructions that, when being executed, cause one or more processors to perform” (Guo, Claim 17, discloses “non-transitory computer-readable medium carrying one or more sequences of instructions, wherein execution of the one or more sequences of instructions by one or more processors causes the one or more processors to perform the steps”).
Regarding Claim 21, the combination of Guo and Abdel-Aziz teaches “The region recognition method according to claim 1, wherein the generating the contour map further comprises:
obtaining a plurality of discrete points on the Bezier curve” (Abdel-Aziz, Section 2, Definition 2.1 discloses “The Bézier curve r(s) of n degree with (n+1) control points is a parametric function”; where control points are a plurality of discrete points on the Bezier curve);
“obtaining a plurality of normal vectors for the plurality of discrete points, each of the plurality of normal vectors corresponding to a respective one of the plurality of discrete points” (Abdel-Aziz, Section 3, Theorem 3.1 discloses “In the light of the above calculations, we get the curvature as Eq. (3.4), and the principal normal vector field is given by
[Equation of the principal normal vector field of Abdel-Aziz (media_image7.png)]
”; where a normal vector field is a plurality of normal vectors for the plurality of discrete points);
“calculating a plurality of contour lines based on the plurality of normal vectors” (Abdel-Aziz, Section 3, Theorem 3.1 discloses “In the light of the above calculations, we get the curvature as Eq. (3.4)”; where the curvature is a plurality of contour lines; see Theorem 3.1 and Equation (3.4):
[Equation (3.4) of Abdel-Aziz (media_image8.png)]
); “and
rendering a plurality of regions corresponding to the plurality of contour lines to obtain the contour map” (Abdel-Aziz, Section 5 and Figs. 5-7 disclose “The output results are displayed in Figs. (5b, 6b, 7b). We prove that our studies provided new curves capable to use in many different areas in medical applications”; where the output curves are a rendered plurality of regions corresponding to the plurality of contour lines). The proposed combination, as well as the motivation for combining the Guo and Abdel-Aziz references, presented in the rejection of Claim 1 apply to Claim 21 and are incorporated herein by reference. Thus, the method recited in Claim 21 is met by the combination of Guo and Abdel-Aziz.
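For illustration of the normal-vector-based contour generation recited in Claim 21, the examiner notes that discrete boundary points can be offset along their unit normals to yield a family of contour lines. The sketch below is the examiner's illustration only (hypothetical helper names, central-difference normals on a closed polyline), not code from either reference.

```python
# Illustrative sketch only: offsetting the discrete points of a closed
# boundary polyline along their unit normals to generate a family of
# contour lines, one per requested offset distance.
import math

def unit_normals(points):
    """2D unit normals from central-difference tangents of a closed polyline;
    each tangent is rotated 90 degrees to give the normal direction."""
    n = len(points)
    normals = []
    for i in range(n):
        (x0, y0), (x1, y1) = points[i - 1], points[(i + 1) % n]
        tx, ty = x1 - x0, y1 - y0
        length = math.hypot(tx, ty) or 1.0  # guard against zero-length tangent
        normals.append((-ty / length, tx / length))
    return normals

def offset_contours(points, offsets):
    """Return one offset polyline (contour line) per normal distance."""
    normals = unit_normals(points)
    return [[(x + d * nx, y + d * ny)
             for (x, y), (nx, ny) in zip(points, normals)]
            for d in offsets]
```

For a counterclockwise boundary these normals point inward, so positive offsets produce successively shrinking contour lines whose corresponding regions can then be rendered to obtain the contour map.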
[Image: Figs. 5-7 of Abdel-Aziz (media_image9.png)]
Claims 9 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Guo et al. (US 2015/0078640 A1), in view of Abdel-Aziz et al. (Generating Bézier curves for medical image reconstruction, published 2021), further in view of Egawa (JP 2000009820 A).
Regarding Claim 9, the combination of Guo and Abdel-Aziz does not explicitly teach the further limitations recited in Claim 9.
However, in an analogous field of endeavor, Egawa teaches “The method according to claim 1, further including:
in response to a target object being located in the target region, determining distance information between the target object and a reference object in the target region according to the contour map” (Egawa, page 2, paragraph 6 discloses “The linear distance between the object terminals is calculated, and based on the distance data obtained by the terminal distance calculating means, the contour figure scale converting means reads the reference contour data or the contour update data, and enlarges or enlarges this data. After reducing the scale and converting the scale of the danger area coordinate data, update the outline data and danger area coordinates”); “and
according to the distance information, generating warning information indicating that the target object invades the target region” (Egawa, page 2, paragraph 6 discloses “After the contour graphic scale conversion, the contact determination means is made to determine whether or not there is any overlap of the danger area coordinate data of the moving object or the object, and the determination result is notified on a display or by voice.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Guo to incorporate the teachings of Egawa by notifying via a display when there is an area of overlap between regions. Guo and Egawa are both directed to region recognition and identification, and thus are in the same field of endeavor. Although the scale of the contours drawn in Guo and Egawa differs, both image processing techniques are directed to determining contours. One of ordinary skill in the art would have been motivated to combine the Guo and Egawa references in order to identify segments of boundaries affected by other boundaries; Guo, [0091] and Fig. 7B disclose “FIG. 7B is a block diagram that illustrates the boundary voxels 721 on the first boundary C1 501 affected by the topographical effects of a second boundary C2 502, according to an embodiment.” Egawa, page 2, paragraph 6 discloses determining any “overlap of the danger area coordinate” data; thus, Egawa is directed to determining safety using the contours. It would have been obvious to one of ordinary skill in the art that determining whether a boundary region overlaps with another boundary region would be applicable to the method of Guo. That is, a known technique may be applied to a known method ready for improvement to yield predictable results. Applying the technique of Egawa to the invention of Guo would produce the predictable result of displaying an alert when distinct contours overlap. Accordingly, the combination of Guo, Abdel-Aziz, and Egawa discloses the invention of Claim 9.
Regarding Claim 18, Claim 18 recites a device with elements corresponding to the steps recited in Claim 9. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding steps of its corresponding method claim. Additionally, the rationale and motivation to combine the Guo and Egawa references, presented in the rejection of Claim 9, apply to this claim. Finally, the combination of Guo, Abdel-Aziz, and Egawa discloses “An electronic device, comprising: one or more processors; and a memory coupled to the one or more processors and storing computer program instructions” (Guo, [0160] discloses “According to one embodiment of the invention, those techniques are performed by computer system 1300 in response to processor 1302 executing one or more sequences of one or more instructions contained in memory 1304. Such instructions, also called software and program code, may be read into memory 1304 from another computer-readable medium such as storage device 1308.”).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CAROLINE TABANCAY DUFFY whose telephone number is (703)756-1859. The examiner can normally be reached Monday - Friday 8:00 am - 5:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini, can be reached at 571-272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CAROLINE TABANCAY DUFFY/Examiner, Art Unit 2662
/AMANDEEP SAINI/Supervisory Patent Examiner, Art Unit 2662