Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Notice to Applicants
This communication is in response to the reply filed on 02/28/2024.
Claims 1-20 are currently pending.
Information Disclosure Statement
The information disclosure statement (IDS) filed on 02/28/2024 has been considered.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-3, 8, 10-12, 14, and 19 are rejected under 35 U.S.C. § 102(a)(2) as being anticipated by US 2024/0265707 A1 to PEPPOLONI et al. (hereinafter “PEPPOLONI”).
As per claim 1, PEPPOLONI discloses an apparatus comprising (a system and corresponding method of finding a ground plane of a vehicle; abstract; figs 1-3, and 12-13; paragraphs [0003], [0007], [0071-0073]): memory (the system comprising a computer having computing components including a memory; paragraphs [0173-0174]); instructions (the memory storing computing instructions to run a program and perform said methods; paragraphs [0173-0175]); an interface to receive video frames captured by a camera positioned on a vehicle (the system provides a computer acting as an interface in order to receive images captured by camera sensors, the sensors being stereo cameras which comprise a set of two cameras positioned on a vehicle; figs 1A-1B; paragraphs [0003], [0013], [0064-0066]); and programmable circuitry coupled to the interface (the computer acting as the interface is connected to the stereo camera and is further connected to computing components such as a computing processor; paragraphs [0172-0175]), wherein the programmable circuitry is to execute the instructions to at least (the computing processor is adapted to execute the instructions stored on the memory component in order to perform the method; paragraphs [0172-0175]): detect, based on the video frames, a ground plane of the vehicle (the system is adapted to detect and identify a ground plane of the vehicle; figs 1B-2B; paragraphs [0066], [0071-0074]); determine, based on the ground plane, a first position parameter of the camera with respect to a coordinate system of the vehicle (the system, based on the identified ground plane, calculates a first and second position in relation to the vehicle and onboard stereo camera; figs 1B-2B, and 8A-B; paragraphs [0073], [0102-0105]); track a plurality of features between ones of the video frames (the system is adapted to track a plurality of road features using a CNN that is trainable to identify specific road features in the captured images, including both a first and second image; figs 6A-B, and 7A-B; paragraphs [0008], [0093-0100]); and determine, based on the plurality of features, a second position parameter of the camera with respect to the coordinate system of the vehicle (the coordinate system is defined by the vehicle and mounted stereo camera sensors and includes a second position found in relation to identified objects of interest and the first position in relation to vehicle position; fig 5A; paragraphs [0080], [0109-0111], [0116], [0144-0145], [0150], [0153]), the first position parameter different from the second position parameter (wherein the first and second positions are different positions tracked within the coordinate system (one parameter being pitch and the other being yaw); paragraphs [0080], [0109-0111], [0116], [0144-0145], [0150], [0153], [0169]).
As per claim 2, PEPPOLONI discloses the apparatus of claim 1, wherein the first position parameter includes at least one of a roll angle, a pitch angle, or a z-axis coordinate of the camera with respect to the coordinate system of the vehicle (one of the plurality of positional parameters tracked by the computing system of the vehicle is the pitch value of the vehicle; paragraphs [0073], [0102-0105], [0169]).
As per claim 3, PEPPOLONI discloses the apparatus of claim 1, wherein the second position parameter includes a yaw angle of the camera with respect to the coordinate system of the vehicle (one of the plurality of positional parameters tracked by the computing system of the vehicle is the yaw value of the vehicle; paragraphs [0073], [0102-0105], [0169]).
As per claim 8, PEPPOLONI discloses the apparatus of claim 1, wherein the programmable circuitry is to execute the instructions to track the plurality of features in response to determining that the vehicle is moving (the computing system is adapted to track position features in relation to the vehicle's movement and build a motion model by fusing the position information, the motion model being used to predict and track the vehicle's motion; paragraph [0080]).
As per claim 10, PEPPOLONI discloses a non-transitory computer readable medium comprising instructions that (a system and corresponding method of finding a ground plane of a vehicle, the system comprising a computer having computing components including a memory, the memory storing computing instructions to run a program and perform said methods; abstract; figs 1-3, and 12-13; paragraphs [0003], [0007], [0071-0073], [0173-0175]), when executed, cause programmable circuitry to at least (the computer acting as the interface is connected to the stereo camera and is further connected to computing components such as a computing processor, the computing processor being adapted to execute the instructions stored on the memory component in order to perform the method; paragraphs [0172-0175]): detect, based on video frames captured by a camera positioned on a vehicle (the system provides a computer acting as an interface in order to receive images captured by camera sensors, the sensors being stereo cameras which comprise a set of two cameras positioned on a vehicle; figs 1A-1B; paragraphs [0003], [0013], [0064-0066]), a ground plane of the vehicle (the system is adapted to detect and identify a ground plane of the vehicle; figs 1B-2B; paragraphs [0066], [0071-0074]); determine, based on the ground plane, a first position parameter of the camera with respect to a coordinate system of the vehicle (the system, based on the identified ground plane, calculates a first and second position in relation to the vehicle and onboard stereo camera; figs 1B-2B, and 8A-B; paragraphs [0073], [0102-0105]); track a plurality of features between ones of the video frames (the system is adapted to track a plurality of road features using a CNN that is trainable to identify specific road features in the captured images, including both a first and second image; figs 6A-B, and 7A-B; paragraphs [0008], [0093-0100]); and determine, based on the plurality of features, a second position parameter of the camera with respect to the coordinate system of the vehicle (the coordinate system is defined by the vehicle and mounted stereo camera sensors and includes a second position found in relation to identified objects of interest and the first position in relation to vehicle position; fig 5A; paragraphs [0080], [0109-0111], [0116], [0144-0145], [0150], [0153]), the first position parameter different from the second position parameter (wherein the first and second positions are different positions tracked within the coordinate system (one parameter being pitch and the other being yaw); paragraphs [0080], [0109-0111], [0116], [0144-0145], [0150], [0153], [0169]).
As per claim 11, PEPPOLONI discloses the non-transitory computer readable medium of claim 10, wherein the first position parameter includes at least one of a roll angle, a pitch angle, or a z-axis coordinate of the camera with respect to the coordinate system of the vehicle (one of the plurality of positional parameters tracked by the computing system of the vehicle is the pitch value of the vehicle; paragraphs [0073], [0102-0105], [0169]).
As per claim 12, PEPPOLONI discloses the non-transitory computer readable medium of claim 10, wherein the second position parameter includes a yaw angle of the camera with respect to the coordinate system of the vehicle (one of the plurality of positional parameters tracked by the computing system of the vehicle is the yaw value of the vehicle; paragraphs [0073], [0102-0105], [0169]).
As per claim 14, PEPPOLONI discloses an apparatus comprising (a system and corresponding method of finding a ground plane of a vehicle, the system comprising a computer having computing components including a memory, the memory storing computing instructions to run a program and perform said methods; abstract; figs 1-3, and 12-13; paragraphs [0003], [0007], [0071-0073], [0173-0175]): plane fitting circuitry to detect (the computer (plane fitting circuitry) acting as the interface is connected to the stereo camera and is further connected to computing components such as a computing processor, the computing processor being adapted to execute the instructions stored on the memory component in order to perform the method; paragraphs [0172-0175]), based on video frames captured by a camera positioned on a vehicle (the system provides a computer acting as an interface in order to receive images captured by camera sensors, the sensors being stereo cameras which comprise a set of two cameras positioned on a vehicle; figs 1A-1B; paragraphs [0003], [0013], [0064-0066]), a ground plane of the vehicle (the system is adapted to detect and identify a ground plane of the vehicle; figs 1B-2B; paragraphs [0066], [0071-0074]); transformation circuitry to determine, based on the ground plane, a first position parameter of the camera with respect to a coordinate system of the vehicle (the system, based on the identified ground plane, calculates a first and second position in relation to the vehicle and onboard stereo camera; figs 1B-2B, and 8A-B; paragraphs [0073], [0102-0105]); feature tracking circuitry to track a plurality of features between ones of the video frames (the system is adapted to track a plurality of road features using a CNN that is trainable to identify specific road features in the captured images, including both a first and second image; figs 6A-B, and 7A-B; paragraphs [0008], [0093-0100]); and yaw estimation circuitry to determine, based on the plurality of features (one of the plurality of positional parameters tracked by the computing system of the vehicle is the yaw value of the vehicle; paragraphs [0073], [0102-0105], [0169]), a second position parameter of the camera with respect to the coordinate system of the vehicle (the coordinate system is defined by the vehicle and mounted stereo camera sensors and includes a second position found in relation to identified objects of interest and the first position in relation to vehicle position; fig 5A; paragraphs [0080], [0109-0111], [0116], [0144-0145], [0150], [0153]), the first position parameter different from the second position parameter (wherein the first and second positions are different positions tracked within the coordinate system (one parameter being pitch and the other being yaw); paragraphs [0080], [0109-0111], [0116], [0144-0145], [0150], [0153], [0169]).
As per claim 19, PEPPOLONI discloses the apparatus of claim 14, wherein the first position parameter includes at least one of a roll angle, a pitch angle, or a z-axis coordinate of the camera with respect to the coordinate system of the vehicle (one of the plurality of positional parameters tracked by the computing system of the vehicle is the pitch value of the vehicle; paragraphs [0073], [0102-0105], [0169]).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
Claims 4, 13, and 20 are rejected under 35 U.S.C. § 103 as being obvious over US 2024/0265707 A1 to PEPPOLONI et al. (hereinafter “PEPPOLONI”) in view of US 10,964,059 B2 to BAMBER et al. (hereinafter “BAMBER”).
As per claim 4, PEPPOLONI discloses the apparatus of claim 1. PEPPOLONI fails to disclose wherein the programmable circuitry is to execute the instructions to determine at least one of an x-axis coordinate or a y-axis coordinate of the camera with respect to the coordinate system of the vehicle based on a computer-aided design (CAD) model of the vehicle.
BAMBER discloses wherein the programmable circuitry is to execute the instructions to determine at least one of an x-axis coordinate or a y-axis coordinate of the camera with respect to the coordinate system of the vehicle based on a computer-aided design (CAD) model of the vehicle (during extrinsic calibration, the camera on the vehicle converts coordinates in the image capture system reference frame (e.g., the coordinate system where the z-axis points down the optical axis) to coordinates in the car reference frame (including x and y axes) in order to construct a full three-dimensional model of the car to understand rotational forces acting on the vehicle, the representative vehicle model being used to determine the orientation of the camera and corresponding vehicle around the x, y, and z axes of the car; column 9, line 39 – column 10, line 2; column 10, lines 3-41).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify PEPPOLONI to include a computer-aided design (CAD) model of the vehicle as taught by the BAMBER reference. The suggestion/motivation for doing so would have been to provide a model to determine the orientation of the image capturing device and the car, helping determine orientation about the x, y, and z axes and providing an accurate model to observe real-life forces at a smaller scale, as suggested by BAMBER at column 10, lines 3-25. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine BAMBER with PEPPOLONI to obtain the invention as specified in claim 4.
As per claim 13, PEPPOLONI discloses the non-transitory computer readable medium of claim 10. PEPPOLONI fails to disclose wherein the instructions, when executed, cause the programmable circuitry to determine at least one of an x-axis coordinate or a y-axis coordinate of the camera with respect to the coordinate system of the vehicle based on a computer-aided design (CAD) model of the vehicle.
BAMBER discloses wherein the instructions, when executed, cause the programmable circuitry to determine at least one of an x-axis coordinate or a y-axis coordinate of the camera with respect to the coordinate system of the vehicle based on a computer-aided design (CAD) model of the vehicle (during extrinsic calibration, the camera on the vehicle converts coordinates in the image capture system reference frame (e.g., the coordinate system where the z-axis points down the optical axis) to coordinates in the car reference frame (including x and y axes) in order to construct a full three-dimensional model of the car to understand rotational forces acting on the vehicle, the representative vehicle model being used to determine the orientation of the camera and corresponding vehicle around the x, y, and z axes of the car; column 9, line 39 – column 10, line 2; column 10, lines 3-41).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify PEPPOLONI to include a computer-aided design (CAD) model of the vehicle as taught by the BAMBER reference. The suggestion/motivation for doing so would have been to provide a model to determine the orientation of the image capturing device and the car, helping determine orientation about the x, y, and z axes and providing an accurate model to observe real-life forces at a smaller scale, as suggested by BAMBER at column 10, lines 3-25. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine BAMBER with PEPPOLONI to obtain the invention as specified in claim 13.
As per claim 20, PEPPOLONI discloses the apparatus of claim 14. PEPPOLONI fails to disclose further including model analysis circuitry to determine at least one of an x-axis coordinate or a y-axis coordinate of the camera with respect to the coordinate system of the vehicle based on a computer-aided design (CAD) model of the vehicle.
BAMBER discloses further including model analysis circuitry to determine at least one of an x-axis coordinate or a y-axis coordinate of the camera with respect to the coordinate system of the vehicle based on a computer-aided design (CAD) model of the vehicle (during extrinsic calibration, the camera on the vehicle converts coordinates in the image capture system reference frame (e.g., the coordinate system where the z-axis points down the optical axis) to coordinates in the car reference frame (including x and y axes) in order to construct a full three-dimensional model of the car to understand rotational forces acting on the vehicle, the representative vehicle model being used to determine the orientation of the camera and corresponding vehicle around the x, y, and z axes of the car; column 9, line 39 – column 10, line 2; column 10, lines 3-41).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify PEPPOLONI to include a computer-aided design (CAD) model of the vehicle as taught by the BAMBER reference. The suggestion/motivation for doing so would have been to provide a model to determine the orientation of the image capturing device and the car, helping determine orientation about the x, y, and z axes and providing an accurate model to observe real-life forces at a smaller scale, as suggested by BAMBER at column 10, lines 3-25. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine BAMBER with PEPPOLONI to obtain the invention as specified in claim 20.
Claims 5 and 15 are rejected under 35 U.S.C. § 103 as being obvious over US 2024/0265707 A1 to PEPPOLONI et al. (hereinafter “PEPPOLONI”) in view of US 2015/0178573 A1 to VISWANATH et al. (hereinafter “VISWANATH”).
As per claim 5, PEPPOLONI discloses the apparatus of claim 1, wherein the programmable circuitry is to execute the instructions to: access point cloud data corresponding to at least one of the video frames (the computing system includes access to 3D point cloud data for the captured video frames; paragraphs [0135-0136], [0139]). PEPPOLONI fails to disclose detecting the ground plane by executing a random sampling and consensus (RANSAC) algorithm based on the point cloud data.
VISWANATH discloses detecting the ground plane by executing a random sampling and consensus (RANSAC) algorithm based on the point cloud data (the system utilizes a RANSAC algorithm in order to select the best homography matrix of the point cloud data; paragraph [0035]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify PEPPOLONI to detect the ground plane by executing a random sampling and consensus (RANSAC) algorithm based on the point cloud data, as taught by the VISWANATH reference. The suggestion/motivation for doing so would have been to ensure that a majority of the feature points obtained are on the ground plane, as suggested by VISWANATH at paragraph [0035]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine VISWANATH with PEPPOLONI to obtain the invention as specified in claim 5.
As per claim 15, PEPPOLONI discloses the apparatus of claim 14, wherein the plane fitting circuitry is to: access point cloud data corresponding to at least one of the video frames (the computing system includes access to 3D point cloud data for the captured video frames; paragraphs [0135-0136], [0139]). PEPPOLONI fails to disclose detecting the ground plane by executing a random sampling and consensus (RANSAC) algorithm based on the point cloud data.
VISWANATH discloses detecting the ground plane by executing a random sampling and consensus (RANSAC) algorithm based on the point cloud data (the system utilizes a RANSAC algorithm in order to select the best homography matrix of the point cloud data; paragraph [0035]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify PEPPOLONI to detect the ground plane by executing a random sampling and consensus (RANSAC) algorithm based on the point cloud data, as taught by the VISWANATH reference. The suggestion/motivation for doing so would have been to ensure that a majority of the feature points obtained are on the ground plane, as suggested by VISWANATH at paragraph [0035]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine VISWANATH with PEPPOLONI to obtain the invention as specified in claim 15.
Claims 6-7 and 16-18 are rejected under 35 U.S.C. § 103 as being obvious over US 2024/0265707 A1 to PEPPOLONI et al. (hereinafter “PEPPOLONI”) in view of US 2021/0207977 A1 to LEE (hereinafter “LEE”).
As per claim 6, PEPPOLONI discloses the apparatus of claim 1. PEPPOLONI fails to disclose wherein the programmable circuitry is to: estimate a camera path of the camera based on the plurality of features; and determine the second position parameter by comparing the camera path to a vehicle path of the vehicle.
LEE discloses wherein the programmable circuitry is to: estimate a camera path of the camera based on the plurality of features (the computing system is adapted to find estimated positions of the vehicle V and the camera C based on the position features provided; fig 11; paragraphs [0069], [0098-0100]); and determine the second position parameter by comparing the camera path to a vehicle path of the vehicle (the computing system is further adapted to compare the estimated vehicle path with the estimated camera position in order to verify that the estimated positions are valid; fig 11; paragraphs [0069], [0098-0100]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify PEPPOLONI to compare the camera path to a vehicle path of the vehicle, as taught by the LEE reference. The suggestion/motivation for doing so would have been to provide position estimation of the vehicle and camera sensors, as suggested by paragraph [0098] of LEE. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine LEE with PEPPOLONI to obtain the invention as specified in claim 6.
As per claim 7, PEPPOLONI in view of LEE discloses the apparatus of claim 6. Modified PEPPOLONI further discloses wherein the programmable circuitry is to execute the instructions to estimate the camera path in response to a count of the plurality of features satisfying a threshold (the system is adapted to include a threshold which may be set (programmed) in order to determine a feature's distance from the camera, and sets a threshold distance for switching from stereo images to mono images to maintain image accuracy; paragraph [0082]).
As per claim 16, PEPPOLONI discloses the apparatus of claim 14. PEPPOLONI fails to disclose wherein the yaw estimation circuitry is to: estimate a camera path of the camera based on the plurality of features; and determine the second position parameter by comparing the camera path to a vehicle path of the vehicle.
LEE discloses wherein the yaw estimation circuitry is to: estimate a camera path of the camera based on the plurality of features (the computing system is adapted to find estimated positions of the vehicle V and the camera C based on the position features provided; fig 11; paragraphs [0069], [0098-0100]); and determine the second position parameter by comparing the camera path to a vehicle path of the vehicle (the computing system is further adapted to compare the estimated vehicle path with the estimated camera position in order to verify that the estimated positions are valid; fig 11; paragraphs [0069], [0098-0100]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify PEPPOLONI to compare the camera path to a vehicle path of the vehicle, as taught by the LEE reference. The suggestion/motivation for doing so would have been to provide position estimation of the vehicle and camera sensors, as suggested by paragraph [0098] of LEE. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine LEE with PEPPOLONI to obtain the invention as specified in claim 16.
As per claim 17, PEPPOLONI in view of LEE discloses the apparatus of claim 16. Modified PEPPOLONI further discloses wherein the yaw estimation circuitry is to estimate the camera path in response to a count of the plurality of features satisfying a threshold (the system is adapted to include a threshold which may be set (programmed) in order to determine a feature's distance from the camera, and sets a threshold distance for switching from stereo images to mono images to maintain image accuracy; paragraph [0082]).
As per claim 18, PEPPOLONI in view of LEE discloses the apparatus of claim 16. Modified PEPPOLONI further discloses wherein the second position parameter includes a yaw angle of the camera with respect to the coordinate system of the vehicle (one of the plurality of positional parameters tracked by the computing system of the vehicle is the yaw value of the vehicle; paragraphs [0073], [0102-0105], [0169]).
Claim 9 is rejected under 35 U.S.C. § 103 as being obvious over US 2024/0265707 A1 to PEPPOLONI et al. (hereinafter “PEPPOLONI”) in view of US 2024/0013555 A1 to PHAN et al. (hereinafter “PHAN”).
As per claim 9, PEPPOLONI discloses the apparatus of claim 1. PEPPOLONI fails to disclose wherein the programmable circuitry is to execute the instructions to detect the ground plane when the vehicle is stationary.
PHAN discloses wherein the programmable circuitry is to execute the instructions to detect the ground plane when the vehicle is stationary (the method of detecting the ground plane is performed when the vehicle and system resident in the vehicle are stationary; paragraph [0005]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify PEPPOLONI to detect the ground plane when the vehicle is stationary, as taught by the PHAN reference. The suggestion/motivation for doing so would have been to provide the advantage of a more accurate ground plane estimation by performing a calibration in real time, because reliance on preexisting calibrations can result in less accurate camera position estimation results, as suggested by PHAN at paragraph [0005]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine PHAN with PEPPOLONI to obtain the invention as specified in claim 9.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. This prior art includes the following:
US 10,719,955 B2
US 2024/0133908 A1
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DEVIN JACOB DHOOGE whose telephone number is (571) 270-0999. The examiner can normally be reached 7:30-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee, can be reached at (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Devin Dhooge/
USPTO Patent Examiner
Art Unit 2677
/ANDREW W BEE/Supervisory Patent Examiner, Art Unit 2677