Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1 and 7 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Claims 1 and 7 are drawn to a “computer-implemented method (CIM)” and a “computer program,” respectively, per se, and therefore fail to fall within a statutory category of invention.
A claim directed to a computer program itself is non-statutory because it is not:
A process occurring as a result of executing the program, or
A machine programmed to operate in accordance with the program, or
A manufacture structurally and functionally interconnected with the program in a manner which enables the program to act as a computer component and realize its functionality, or
A composition of matter.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 7, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over YAMABUKI et al., JP 2024091108, in view of Chen et al. (US 2013/0235157).
Regarding claims 1, 7, and 13, YAMABUKI et al., JP 2024091108, figs. 1-8, discloses a computer-implemented method (CIM) comprising: receiving a plurality of image frames, with each given image frame of the plurality of image frames including an image (system 100 includes a step of generating a grayscale monitoring image spanning a first predetermined number of frames in a time series and a second predetermined number of differential monitoring images obtained by taking the differences between the frames of the grayscale monitoring image from a time series of monitoring images captured in a monitoring area during welding), and with each given image frame of the plurality of image frames being organized in a chronological sequence (the processor executes the software program, generating grayscale monitoring images spanning a first predetermined number of frames that are sequential in time from monitoring images captured in a monitoring area during the welding, and generating a second predetermined number of difference monitoring images by taking differences between frames of the grayscale monitoring images); converting each given image of the plurality of images from its original image format into a grayscale image format to obtain a grayscale image frame sequence having a plurality of grayscale images (the image capture device 2 captures the surveillance image as a three-channel color image; the processor 11, which functions as the image processing unit 104, converts the three-channel color image into a one-channel grayscale image, and this conversion results in a grayscale image spanning multiple frames in a time series);
for each given grayscale image of the grayscale image frame sequence, calculating a two-dimensional image entropy in order to obtain a set of image entropy data; clustering the set of image entropy data in order to obtain a plurality of clusters, with a number of clusters in the plurality of clusters being defined by a user; selecting grayscale image frames that are associated with a first cluster of the plurality of clusters; and performing a three-dimensional image reconstruction of the selected grayscale frames that are associated with the first cluster.
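For illustration only, the recited sequence of steps (grayscale conversion, per-frame two-dimensional image entropy, clustering of the entropy data with a user-defined number of clusters, and selection of the first cluster's frames) can be sketched as follows. This is a generic sketch of the recited technique, not code from the application or the cited references; all function names, the 3x3 neighborhood definition of 2-D entropy, and the k-means initialization are illustrative assumptions.

```python
import numpy as np

def to_grayscale(frame_rgb):
    # Convert a 3-channel color frame to 1-channel grayscale
    # (ITU-R BT.601 luma weights).
    return frame_rgb.astype(np.float64) @ np.array([0.299, 0.587, 0.114])

def two_dim_entropy(gray, levels=256):
    # Two-dimensional image entropy: Shannon entropy of the joint
    # histogram of each pixel's gray value and the mean gray value
    # of its 3x3 neighborhood.
    g = gray.astype(np.float64)
    padded = np.pad(g, 1, mode="edge")
    nbr = np.zeros_like(g)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            nbr += padded[dy:dy + g.shape[0], dx:dx + g.shape[1]]
    nbr /= 9.0
    hist, _, _ = np.histogram2d(g.ravel(), nbr.ravel(),
                                bins=levels, range=[[0, levels], [0, levels]])
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def kmeans_1d(values, k, iters=50):
    # Minimal 1-D k-means with deterministic linspace initialization;
    # k is the user-defined number of clusters.
    v = np.asarray(values, dtype=np.float64)
    centers = np.linspace(v.min(), v.max(), k)
    labels = np.zeros(len(v), dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.abs(v[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = v[labels == j].mean()
    return labels, centers

def select_first_cluster(gray_frames, k):
    # Cluster the per-frame 2-D entropies and return the frames
    # that fall in the first cluster (cluster index 0).
    entropies = [two_dim_entropy(f) for f in gray_frames]
    labels, _ = kmeans_1d(entropies, k)
    return [f for f, lab in zip(gray_frames, labels) if lab == 0]
```

A uniform frame yields zero 2-D entropy (its joint histogram has a single occupied cell), while a noisy frame yields a strictly positive value, so frames separate naturally under the clustering step.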
YAMABUKI et al., JP 2024091108, figs. 1-8, is silent about calculating a two-dimensional image entropy and/or performing a three-dimensional image reconstruction.
Chen et al. (US 2013/0235157), figs. 5-8, discloses video content may be recorded in two-dimensional (2D) format or in three-dimensional (3D) format. In various applications such as, for example, the DVD movies and the digital TV, a 3D video is often desirable because it is often more realistic to viewers than the 2D counterpart. A 3D video comprises a left view video and a right view video. A 3D video frame may be produced by combining left view video components and right view video components, respectively.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide, in YAMABUKI et al., video content recorded in two-dimensional (2D) format or in three-dimensional (3D) format, where in various applications such as DVD movies and digital TV a 3D video is often desirable because it is often more realistic to viewers than its 2D counterpart, a 3D video comprising a left view video and a right view video, and a 3D video frame being produced by combining left view video components and right view video components, respectively, as suggested by Chen et al., the motivation being to produce a 3D video frame by combining left view video components and right view video components.
Therefore, the combination of YAMABUKI et al., JP 2024091108, figs. 1-8, and Chen et al. (US 2013/0235157), figs. 5-8, discloses calculating a two-dimensional image entropy in order to obtain a set of image entropy data; clustering the set of image entropy data in order to obtain a plurality of clusters, with a number of clusters in the plurality of clusters being defined by a user; selecting grayscale image frames that are associated with a first cluster of the plurality of clusters; and performing a three-dimensional image reconstruction of the selected grayscale frames that are associated with the first cluster (YAMABUKI et al.: the system disclosed in the patent includes a welding tool for performing welding, the welding tool including a plurality of optical fibers arranged to receive electromagnetic radiation from a welding region, an optical signal receiving device that receives optical signals from the plurality of optical fibers during welding to generate optical signal information, and a computing device including one or more processors that are programmed with executable instructions and perform processing. The processing includes receiving optical signal information based on each optical signal received through the plurality of optical fibers during welding, comparing optical signal information corresponding to a first optical fiber of the plurality of optical fibers with optical signal information corresponding to a second optical fiber of the plurality of optical fibers, determining whether the weld is irregular based on the comparison, and performing at least one action based on whether the weld is irregular. Chen et al. (US 2013/0235157), figs. 5-8: video content may be recorded in two-dimensional (2D) format or in three-dimensional (3D) format. In various applications such as, for example, DVD movies and digital TV, a 3D video is often desirable because it is often more realistic to viewers than the 2D counterpart. A 3D video comprises a left view video and a right view video.
A 3D video frame may be produced by combining left view video components and right view video components, respectively. Compressed video frames may be divided into groups of pictures (GOPs). For example, each GOP comprises one I-picture, several P-pictures and/or several B-pictures).
Claims 2-6, 8-12, and 14-18 are rejected under 35 U.S.C. 103 as being unpatentable over YAMABUKI et al., JP 2024091108, and Chen et al. (US 2013/0235157), in view of Leung et al. (US 2013/0148851).
Regarding claims 2 and 3, the combination of YAMABUKI et al., JP 2024091108, figs. 1-8, and Chen et al. (US 2013/0235157), figs. 5-8, discloses the CIM of claim 1.
However, the combination of YAMABUKI et al., JP 2024091108, figs. 1-8, and Chen et al. (US 2013/0235157) is silent about a gray value at a defined pixel position.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide a calculation of the two-dimensional image entropy based, at least in part, upon a distribution of gray pixels that are proximate to a defined gray value, in YAMABUKI et al. and Chen et al., as suggested by Leung et al., the motivation being to determine connected groups of dark pixels below a certain gray value threshold.
Leung et al. (US 2013/0148851) discloses that the current frame may be binarized at the detecting step 402 to determine connected groups of dark pixels below a certain gray value threshold. In this instance, the contour of each group of dark pixels is extracted, and those groups of pixels surrounded by four straight lines are marked as potential markers. Four corners of every potential marker are used to determine a homography in order to remove perspective distortion. Once the internal pattern of a calibration marker is brought to a canonical front view, a grid of N×N binary values is determined. The binary values of the grid form a feature vector that is compared to the feature vector of the calibration marker pattern 298 by correlation. The output of the comparison is a confidence factor. If the confidence factor is greater than a pre-determined threshold, then the calibration marker pattern 298 is considered to be detected in the current frame at step 402.
Therefore, the combination of YAMABUKI et al., JP 2024091108, Chen et al. (US 2013/0235157), and Leung et al. (US 2013/0148851), discloses wherein the calculation of the two-dimensional image entropy is based, at least in part, upon a set of pixel density values; and/or the calculation of the two-dimensional image entropy is based, at least in part, upon a gray value at a defined pixel position (YAMABUKI et al.: annotating the smoke area by placing multiple rectangles, areas of a certain density or higher can be properly indicated by combining multiple rectangles, which can be used to detect welding smoke well, even though the background may affect the smoke depending on its density. Chen et al.: video content may be recorded in two-dimensional (2D) format or in three-dimensional (3D) format. In various applications such as, for example, DVD movies and digital TV, a 3D video is often desirable because it is often more realistic to viewers than the 2D counterpart. A 3D video comprises a left view video and a right view video. A 3D video frame may be produced by combining left view video components and right view video components, respectively), and (see Leung et al.: the current frame may be binarized at the detecting step 402 to determine connected groups of dark pixels below a certain gray value threshold. In this instance, the contour of each group of dark pixels is extracted, and those groups of pixels surrounded by four straight lines are marked as potential markers. Four corners of every potential marker are used to determine a homography in order to remove perspective distortion. Once the internal pattern of a calibration marker is brought to a canonical front view, a grid of N×N binary values is determined. The binary values of the grid form a feature vector that is compared to the feature vector of the calibration marker pattern 298 by correlation. The output of the comparison is a confidence factor.
If the confidence factor is greater than a pre-determined threshold, then the calibration marker pattern 298 is considered to be detected in the current frame at step 402. [0113] In an alternative arrangement, instead of binarizing the current frame using a fixed gray value threshold, at the detecting step 402, edge pixels may be detected using an edge detector. In this instance, the edge pixels are linked into segments, which in turn are grouped into quadrangles. The four corners of each quadrangle are used to determine a homography to remove the perspective distortion. An interior pattern is then sampled and compared to the feature vector of a known calibration marker pattern 298 by correlation. The calibration marker pattern 298 is considered to be found if the output of the comparison is greater than a pre-determined threshold).
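Leung's binarization step can be illustrated with a minimal sketch: threshold the frame at a fixed gray value and label 4-connected groups of dark pixels. The function name, threshold value, and flood-fill labeling approach are illustrative assumptions, not the reference's actual implementation.

```python
import numpy as np
from collections import deque

def dark_pixel_groups(gray, threshold=80):
    # Binarize the frame at a fixed gray-value threshold, then label
    # 4-connected groups of dark pixels by breadth-first flood fill.
    dark = gray < threshold
    labels = np.zeros(gray.shape, dtype=int)
    h, w = gray.shape
    n_groups = 0
    for y in range(h):
        for x in range(w):
            if dark[y, x] and labels[y, x] == 0:
                n_groups += 1            # start a new connected group
                labels[y, x] = n_groups
                queue = deque([(y, x)])
                while queue:
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and dark[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = n_groups
                            queue.append((ny, nx))
    return labels, n_groups
```

Each labeled group would then be a candidate whose contour is extracted and tested for the four-straight-line boundary described in the quoted passage.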
Regarding claim 4, the combination of YAMABUKI et al., JP 2024091108, Chen et al. (US 2013/0235157), and Leung et al. (US 2013/0148851), discloses the CIM of claim 1, wherein the calculation of the two-dimensional image entropy is based, at least in part, upon a distribution of gray pixels that are proximate to a defined gray value (YAMABUKI et al.: annotating the smoke area by placing multiple rectangles, areas of a certain density or higher can be properly indicated by combining multiple rectangles, which can be used to detect welding smoke well, even though the background may affect the smoke depending on its density. Chen et al.: video content may be recorded in two-dimensional (2D) format or in three-dimensional (3D) format. In various applications such as, for example, DVD movies and digital TV, a 3D video is often desirable because it is often more realistic to viewers than the 2D counterpart. A 3D video comprises a left view video and a right view video. A 3D video frame may be produced by combining left view video components and right view video components, respectively), and (see Leung et al.: the current frame may be binarized at the detecting step 402 to determine connected groups of dark pixels below a certain gray value threshold. In this instance, the contour of each group of dark pixels is extracted, and those groups of pixels surrounded by four straight lines are marked as potential markers. Four corners of every potential marker are used to determine a homography in order to remove perspective distortion. Once the internal pattern of a calibration marker is brought to a canonical front view, a grid of N×N binary values is determined. The binary values of the grid form a feature vector that is compared to the feature vector of the calibration marker pattern 298 by correlation. The output of the comparison is a confidence factor.
If the confidence factor is greater than a pre-determined threshold, then the calibration marker pattern 298 is considered to be detected in the current frame at step 402. [0113] In an alternative arrangement, instead of binarizing the current frame using a fixed gray value threshold, at the detecting step 402, edge pixels may be detected using an edge detector. In this instance, the edge pixels are linked into segments, which in turn are grouped into quadrangles. The four corners of each quadrangle are used to determine a homography to remove the perspective distortion. An interior pattern is then sampled and compared to the feature vector of a known calibration marker pattern 298 by correlation. The calibration marker pattern 298 is considered to be found if the output of the comparison is greater than a pre-determined threshold).
Regarding claim 5, the combination of YAMABUKI et al., JP 2024091108, Chen et al. (US 2013/0235157), and Leung et al. (US 2013/0148851), discloses the CIM of claim 1, wherein the plurality of image frames includes image frames that are taken from video data of multiple angles of a target object (YAMABUKI et al.: annotating the smoke area by placing multiple rectangles, areas of a certain density or higher can be properly indicated by combining multiple rectangles, which can be used to detect welding smoke well, even though the background may affect the smoke depending on its density. Chen et al.: video content may be recorded in two-dimensional (2D) format or in three-dimensional (3D) format. In various applications such as, for example, DVD movies and digital TV, a 3D video is often desirable because it is often more realistic to viewers than the 2D counterpart. A 3D video comprises a left view video and a right view video. A 3D video frame may be produced by combining left view video components and right view video components, respectively), and (see Leung et al., [0162]: the methods described above may be used for product design. As an example, consider a textured cube with a calibrated marker pattern (e.g., the calibrated marker pattern 298) on one side of the cube. In this instance, a designer may firstly initialize an augmented reality (AR) system by looking at the calibrated marker pattern through a camera phone or a head mounted display (HMD) and move the textured cube to a new location. A pair of keyframes may be determined for generating the initial map as described above. Computer graphics representing, for example, a photocopier may be superimposed into the images when viewed through the camera phone or head mounted display. The designer may move about or rotate the cube to inspect the design from different viewing angles and positions.
Buttons on synthetic printers may then be selected to see a computer animation that simulates the operation of the photocopier in response to the button selections).
Regarding claim 6, the combination of YAMABUKI et al., JP 2024091108, Chen et al. (US 2013/0235157), and Leung et al. (US 2013/0148851), discloses the CIM of claim 1, further comprising: for each given grayscale image of the grayscale image frame sequence, calculating a one-dimensional entropy in order to obtain a set of image entropy values; and selecting the grayscale image that corresponds to a largest image entropy value (YAMABUKI et al.: annotating the smoke area by placing multiple rectangles, areas of a certain density or higher can be properly indicated by combining multiple rectangles, which can be used to detect welding smoke well, even though the background may affect the smoke depending on its density. Chen et al.: video content may be recorded in two-dimensional (2D) format or in three-dimensional (3D) format. In various applications such as, for example, DVD movies and digital TV, a 3D video is often desirable because it is often more realistic to viewers than the 2D counterpart. A 3D video comprises a left view video and a right view video. A 3D video frame may be produced by combining left view video components and right view video components, respectively), and (see Leung et al.: the current frame may be binarized at the detecting step 402 to determine connected groups of dark pixels below a certain gray value threshold. In this instance, the contour of each group of dark pixels is extracted, and those groups of pixels surrounded by four straight lines are marked as potential markers. Four corners of every potential marker are used to determine a homography in order to remove perspective distortion. Once the internal pattern of a calibration marker is brought to a canonical front view, a grid of N×N binary values is determined. The binary values of the grid form a feature vector that is compared to the feature vector of the calibration marker pattern 298 by correlation. The output of the comparison is a confidence factor.
If the confidence factor is greater than a pre-determined threshold, then the calibration marker pattern 298 is considered to be detected in the current frame at step 402).
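The one-dimensional entropy calculation and maximum-entropy frame selection recited in claim 6 can be sketched for illustration as follows; the function names and histogram binning are illustrative assumptions, not code from the application or the cited references.

```python
import numpy as np

def one_dim_entropy(gray, levels=256):
    # One-dimensional Shannon entropy of the gray-level histogram.
    hist, _ = np.histogram(gray, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def max_entropy_frame(gray_frames):
    # Select the grayscale frame with the largest 1-D entropy value.
    entropies = [one_dim_entropy(f) for f in gray_frames]
    return gray_frames[int(np.argmax(entropies))]
```

Unlike the two-dimensional entropy of claim 1, this measure depends only on the gray-level histogram, not on any neighborhood structure, so it is cheaper to compute but blind to spatial arrangement.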
Regarding claims 8-12 and 14-18, see the rejections of claims 2-6 above, respectively.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Van N Chow whose telephone number is (571) 272-7590. The examiner can normally be reached M-F, 10AM-6PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Ke, can be reached at 571-272-7776. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/VAN N CHOW/Primary Examiner, Art Unit 2627