DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of claims: Claims 1-20 are examined below.
Response to Arguments
Applicant's arguments filed 3/6/2026 have been fully considered but they are not persuasive.
Applicant's remark (pages 7-8) – Applicant argued that the cited art lacks any teaching of the newly added claim amendment. Please see the Remarks for detail.
Examiner's response – Examiner respectfully disagrees. An updated search necessitated by the new claim amendment found that Civin et al (US 2015/0064153) teaches the new claim amendment in figure 9 and paragraphs 0132, 0155, and 0160-0163, and in figure 16 and paragraph 0150. Please see the Office Action below for further detail.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al (US 2021/0100524) and Civin et al (US 2015/0064153).
Claim 1:
Li et al (US 2021/0100524) teaches the following subject matter:
A method comprising: receiving a plurality of microfluidic images of fluid flow within a fluid channel at a plurality of times (0005 teaches sequential imaging of blood flow in a blood vessel (boundary, constraints), and 0055 teaches pixels for the blood vessel region, location, and spatial location; figures 7-8 and paragraphs 0022-0023 teach vector flow estimates from images to measure flow velocity magnitude and angle; figure 11 and paragraph 0026 teach vector flow images from an in vivo human acquisition; 0082 details the use of Doppler flow imaging at different slow-time dimensions with video and the estimation of optical flow data sets, with RF data acquired using different imaging angles for different images of the same blood vessels);
analyzing, with a machine learning model, the plurality of microfluidic images to predict at least two fields for predicted fluid flow within the fluid channel, the at least two fields selected from the group consisting of a velocity field, a pressure field, and/or a stress field (0003 teaches using a deep learning neural network to estimate blood flow velocity in 3D, where 0005 details 2D velocity measurement as well; 0012-0013 detail the same for 2D and 3D using a neural network, where 0014 teaches prediction and measurement of flow patterns, abnormal flow, and flow velocity; figures 7-8 and paragraphs 0022-0023 teach vector flow estimation results from simulated data, used for prediction, where the vector flow shows the velocity field and pressure/stress via the velocity magnitude);
calculating a loss measure for the at least two fields based on at least two of physical fluid flow constraints, boundary condition constraints for fluid flow within the fluid channel, and data mismatch constraints between the predicted fluid flow and the plurality of microfluidic images (0005 and 0055 above teach the blood (physical fluid) vessel region, boundary, and constraints (flow constraints, boundary conditions, fluid channel), where 0074-0079 detail the use of a neural network loss value (from a loss function) to calculate the difference (mismatch) between the predicted flow velocities and the ground truth in vessel regions (boundary condition constraints of the fluid channel)); and
updating the machine learning model based on the loss measure (0061 teaches a neural network with an adaptive training process, and 0074-0076 teach training the neural network with a loss function (paragraph 0076 teaches a loss measure using a function of a summation of five terms), where training is viewed as updating the machine learning model).
Li et al do not teach the following:
including a shear stress field and at least one additional field, and microfluidics.
Civin et al (US 2015/0064153) teaches: including a shear stress field and at least one additional field (figure 9 and paragraph 0132 teach measurement of velocity and pressure fields from the camera at capture speed, where 0155 and 0160 detail shear stress from the flow rate of a fluid stream through gaps and posts; paragraph 0136 details application to pressure sensing for microfluidics) and microfluidics (figure 16 and paragraph 0150 detail microfluidic time-lapse images related to blood).
Li et al and Civin et al are both in the field of image analysis, especially the use of images to measure and predict velocimetry, such that the combined outcome is predictable.
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date to modify Li et al with Civin et al, because knowing such different fields assists in preventing transfusion reactions as well as reducing fluid volume loads, such as toxic effects to patients, as disclosed by Civin et al in paragraph 0163.
Claim 2:
Li et al teach:
The method of claim 1, wherein the at least two fields are two-dimensional fields for the predicted fluid flow (0005 teaches obtaining 2D velocity measurements of blood flow (fluid) with intensive estimators (predict); 0066-0068 teach a flow velocity field in 2D space).
Claim 3:
Li et al teach:
The method of claim 1, wherein the at least two fields are three-dimensional fields for the predicted fluid flow (0003 teaches obtaining 3D velocity measurements of blood flow (fluid) with intensive estimators (predict) using a deep learning neural network).
Claim 4:
Li et al teach:
The method of claim 1, wherein the boundary condition constraints include a boundary condition measure computed based on compliance of the predicted fluid flow with a predetermined boundary condition (0074-0079 detail the use of a neural network to calculate the difference (mismatch) between the predicted flow velocities and the ground truth in vessel regions (boundary condition constraints of the fluid channel), where the range of the difference determines the compliance).
Claim 5:
Li et al teach:
The method of claim 4, wherein the predetermined boundary condition is selected from the group consisting of a slip boundary condition and a non-slip boundary condition (figures 7-8 and paragraphs 0022-0023 teach the use of ground truth shown with dashed lines and estimates (predicted) shown in solid lines for slip and non-slip).
Claim 6:
Li et al teach:
The method of claim 1, wherein the physical fluid flow constraints include a physical conservation measure computed to measure compliance of the predicted fluid flow with fluid dynamic flow constraints (figures 3-4 and 0057 teach optical flow rate with RF data from Doppler acquisition (physical conservation) input to neural network prediction of fluid flow; paragraphs 0074-0082 further detail optical flow imaging in the well-documented literature and video optical flow data sets).
Claim 7:
Li et al teach:
The method of claim 6, wherein the fluid dynamic flow constraints include an optical flow constraint (the abstract teaches blood flow dynamics with a temporal hemodynamic flow velocity profile in multiple blood flow dimensions at a given spatial location; 0081-0082, where optical flow with variable rates (dynamic flow) depends on the location of the blood vessel interior and edge).
Claim 8:
Li et al teach:
The method of claim 6, wherein the physical conservation measure is computed at a predetermined set of coordinates within the predicted fluid flow (the above and 0074-0082 further detail optical flow imaging in the well-documented literature and ground truth, all viewed as a predetermined set of coordinates, where the above teaches the use of a neural network for prediction).
Claim 9:
Li et al teach:
The method of claim 1, wherein the machine learning model is a fully-connected neural network (0006 and the above teach a deep neural network, which is a fully connected neural network).
Claim 10:
Li et al teach: The method of claim 1, wherein the microfluidic images are two-dimensional images of the fluid channel (the above teaches images of blood (microfluid) vessels in 2D (2D images) in the blood vessel (fluid channel) region, location, and spatial location (paragraph 0055)). Civin et al teach a plurality of microfluidic images (figure 16 and paragraph 0150 detail microfluidic time-lapse images related to blood).
Claim 11:
Li et al teach: The method of claim 1, wherein the microfluidic images are three-dimensional images of the fluid channel (0003 teaches obtaining 3D velocity measurements of blood flow (fluid) with intensive estimators (predict) using a deep learning neural network). Civin et al teach a plurality of microfluidic images (figure 16 and paragraph 0150 detail microfluidic time-lapse images related to blood).
Claim 12:
Li et al teach: The method of claim 1, wherein the microfluidic images are successive images captured by a video camera (0031 teaches producing videos of Doppler vector flow; paragraph 0071 teaches consecutive Doppler frames, where step five details the use of video of vector flow velocity). Civin et al teach a plurality of microfluidic images (figure 16 and paragraph 0150 detail microfluidic time-lapse images related to blood).
Claim 13:
Li et al teach: The method of claim 1, wherein the fluid is blood and the microfluidic images depict at least one of individual blood vessels and/or individual platelets (figure 11 and paragraph 0026 teach in vivo imaging results, where figure 11B is provided to show a vessel location (one vessel); 0055 teaches the use of a binary mask on the power Doppler image for the blood vessel region, with RF for further suppression of noise outside the blood vessel). Civin et al teach a plurality of microfluidic images (figure 16 and paragraph 0150 detail microfluidic time-lapse images related to blood).
Claim 14:
Li et al (US 2021/0100524) teach the following subject matter:
A system comprising:
a processor (0015 and 0027 teaches use of processor); and
a memory storing instructions which, when executed by the processor, cause the processor to (paragraph 0015 and 0027):
receive a plurality of microfluidic images of fluid flow within a fluid channel at a plurality of times (0005 teaches sequential imaging of blood flow in a blood vessel (boundary, constraints), and 0055 teaches pixels for the blood vessel region, location, and spatial location; figures 7-8 and paragraphs 0022-0023 teach vector flow estimates from images to measure flow velocity magnitude and angle; figure 11 and paragraph 0026 teach vector flow images from an in vivo human acquisition; 0082 details the use of Doppler flow imaging at different slow-time dimensions with video and the estimation of optical flow data sets, with RF data acquired using different imaging angles for different images of the same blood vessels);
analyze, with a machine learning model, the plurality of microfluidic images to predict at least two fields for predicted fluid flow within the fluid channel, the at least two fields selected from the group consisting of a velocity field, a pressure field, and/or a stress field (0003 teaches using a deep learning neural network to estimate blood flow velocity in 3D, where 0005 details 2D velocity measurement as well; 0012-0013 detail the same for 2D and 3D using a neural network, where 0014 teaches prediction and measurement of flow patterns, abnormal flow, and flow velocity; figures 7-8 and paragraphs 0022-0023 teach vector flow estimation results from simulated data, used for prediction, where the vector flow shows the velocity field and pressure/stress via the velocity magnitude);
calculate a loss measure for the at least two fields based on at least two of physical fluid flow constraints, boundary condition constraints for fluid flow within the fluid channel, and data mismatch constraints between the predicted fluid flow and the plurality of microfluidic images (0005 and 0055 above teach the blood (physical fluid) vessel region, boundary, and constraints (flow constraints, boundary conditions, fluid channel), where 0074-0079 detail the use of a neural network loss value (from a loss function) to calculate the difference (mismatch) between the predicted flow velocities and the ground truth in vessel regions (boundary condition constraints of the fluid channel)); and
update the machine learning model based on the loss value (0061 teaches a neural network with an adaptive training process, and 0074-0076 teach training the neural network with a loss function (paragraph 0076), where training is viewed as updating the machine learning model).
Li et al do not teach the following:
including a shear stress field and at least one additional field, and microfluidics.
Civin et al (US 2015/0064153) teaches: including a shear stress field and at least one additional field (figure 9 and paragraph 0132 teach measurement of velocity and pressure fields from the camera at capture speed, where 0155 and 0160 detail shear stress from the flow rate of a fluid stream through gaps and posts; paragraph 0136 details application to pressure sensing for microfluidics) and microfluidics (figure 16 and paragraph 0150 detail microfluidic time-lapse images related to blood).
Li et al and Civin et al are both in the field of image analysis, especially the use of images to measure and predict velocimetry, such that the combined outcome is predictable.
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date to modify Li et al with Civin et al, because knowing such different fields assists in preventing transfusion reactions as well as reducing fluid volume loads, such as toxic effects to patients, as disclosed by Civin et al in paragraph 0163.
Claim 15:
Li et al teach:
The system of claim 14, wherein the at least two fields are two-dimensional fields for the predicted fluid flow (0005 teaches obtaining 2D velocity measurements of blood flow (fluid) with intensive estimators (predict); 0066-0068 teach a flow velocity field in 2D space).
Claim 16:
Li et al teach:
The system of claim 14, wherein the at least two fields are three-dimensional fields for the predicted fluid flow (0003 teaches obtaining 3D velocity measurements of blood flow (fluid) with intensive estimators (predict) using a deep learning neural network).
Claim 17:
Li et al teach:
The system of claim 14, wherein the boundary condition constraints include a boundary condition measure computed based on compliance of the predicted fluid flow with a predetermined boundary condition (0074-0079 detail the use of a neural network to calculate the difference (mismatch) between the predicted flow velocities and the ground truth in vessel regions (boundary condition constraints of the fluid channel), where the range of the difference determines the compliance).
Claim 18:
Li et al teach:
The system of claim 17, wherein the predetermined boundary condition is selected from the group consisting of a slip boundary condition and a non-slip boundary condition (figures 7-8 and paragraphs 0022-0023 teach the use of ground truth shown with dashed lines and estimates (predicted) shown in solid lines for slip and non-slip).
Claim 19:
Li et al teach:
The system of claim 14, wherein the physical fluid flow constraints include a physical conservation measure computed based on compliance of the predicted fluid flow with fluid dynamic flow constraints (figures 3-4 and 0057 teach optical flow rate with RF data from Doppler acquisition (physical conservation) input to neural network prediction of fluid flow; paragraphs 0074-0082 further detail optical flow imaging in the well-documented literature and video optical flow data sets).
Claim 20:
Li et al teach:
The system of claim 19, wherein the fluid dynamic flow constraints include an optical flow constraint (the abstract teaches blood flow dynamics with a temporal hemodynamic flow velocity profile in multiple blood flow dimensions at a given spatial location; 0081-0082, where optical flow with variable rates (dynamic flow) depends on the location of the blood vessel interior and edge).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Peña Monferrer et al (US 2022/017840) teaches "Controlling a Multiphase Flow": a processor provides the images to a neural network adapted to determine a distribution of a spatial property of the plurality of particles from the provided images, and a processor determines the distribution of the spatial property of the plurality of particles in the multiphase flow, based on the provided images, using the neural network (abstract).
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TSUNG-YIN TSAI, whose telephone number is (571) 270-1671. The examiner can normally be reached 7am-4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bhavesh Mehta, can be reached at (571) 272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TSUNG YIN TSAI/Primary Examiner, Art Unit 2656