Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without reciting elements that amount to significantly more than the abstract idea. The rationale for this rejection under MPEP § 2106 is explained below.
Step 1: Under step 1, the claims are analyzed to determine if the claim is directed to a
process, machine, article of manufacture, or composition of matter. For the claims in question,
claims 1-9 are directed towards a process and claims 10-18 are directed towards a machine.
Step 2A, Prong 1: Under step 2A, prong 1, the claims are evaluated to determine if the
claim recites a judicial exception, which includes the laws of nature, physical phenomena, or an
abstract idea. For independent claim 1 (and corresponding independent claim 10), all limitations present are directed towards a mental process. Using Applicant’s Fig. 3 as an example, an individual can look at the image and mentally define a region of interest (i.e., a 3D image mask which has an associated “depth information” based on how visually deep the region is in the image), such as the doorway present in Fig. 3. The individual can then detect an object (i.e., Person O2(O) and O1(O)) in the image, determine “depth information” associated with that object based on the context of the information, and compare it to the region of interest (i.e., person O1(O) is present deeper in the image than the mask, and is therefore inside the building) in order to reach a conclusion regarding where the individual is present in the image.
Step 2A, Prong 2: Under step 2A, prong 2, the claims are evaluated to determine
whether the claim as a whole integrates the recited judicial exception into a practical application
of the exception (see MPEP 2106.04(d)). The examiner notes that MPEP 2106.05(a) -(c) and (e)
generally concern limitations that are indicative of integration, whereas 2106.05(f)-(h) generally
concern limitations that are not indicative of integration.
In regards to independent claims 1 and 10, all the limitations of the claims are directed towards a mental process as described above. The additional limitation of “an image surveillance apparatus having an image receiver and operation processor” amounts to mere instructions for implementing an abstract idea using a computer, and does not constitute integration into a practical application or significantly more (see MPEP 2106.05(f)).
In regards to dependent claims 2-9 and 11-18, the additional limitations either further recite abstract ideas (e.g., “computing an object distance” in claim 4 is directed towards a mathematical concept, and “utilizing object identification technology to acquire height information of the target object” in claim 7 is directed towards a mental process (i.e., an individual can look at an image and determine some estimate for the height of an object)), or recite additional limitations which are not indicative of integration into a practical application (see MPEP 2106.05(f)-(h)).
The examiner emphasizes MPEP 2106.05(a), which states that a limitation is indicative of integration into a practical application if the limitation identifies a manner in which an improvement is explicitly and specifically achieved and recited in the claims. The current claim limitations are all recited at a high level of generality, which does not serve to integrate them into a practical application in view of MPEP 2106.05(f), and furthermore nothing precludes the current limitations from being interpreted under the mental processes grouping.
Step 2B: Under step 2B, the claims are evaluated as a whole to determine whether they amount to significantly more than the recited exception (i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim). The considerations of step 2A, prong 2 and step 2B overlap, but differ in that step 2B also requires considering the claim as a whole/combination of limitations and, with reference to MPEP 2106.05(d), whether the claims feature any “specific limitation(s) other than what is well-understood, routine, conventional activity in the field” (WURC). The Examiner asserts that, even when considered in combination, the additional elements of claims 1-18 represent mere instructions for utilizing a computing device to perform, at a high level of generality, a process which otherwise can be performed mentally, and therefore do not provide a specifically recited inventive concept.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1 and 10 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Wynne et al. (TW 202305646A; hereinafter “Wynne”).
Regarding Claim 1, Wynne discloses a 3D image mask analysis method applied to an image surveillance apparatus having an image receiver and an operation processor, the 3D image mask analysis method comprising (see [0004], [0022]):
the operation processor setting a 3D image mask with first depth information inside a surveillance image acquired by the image receiver (Fig. 1b, [0025], Wynne discloses obtaining a surveillance image, wherein portions of a public area are masked by a mask defined in three dimensions.);
the operation processor utilizing an object identification technology to acquire second depth information of a target object at least partly overlapped with the 3D image mask inside the surveillance image (Fig. 2, [0026-0027], Wynne discloses an individual 14 that is located in an area 13b overlapping with three-dimensional mask 8b, and consequently depth information associated with individual 14.); and
the operation processor comparing the first depth information with the second depth information to determine whether an image of the target object is displayed inside the surveillance image in accordance with a comparison result ([0027], Wynne discloses determining a difference in depth/distance information (i.e., comparing) between an individual 14 and occlusion areas 8a/8b (i.e., three-dimensional masks), and consequently displaying an image of the individual 14.).
Claim 10 is the apparatus claim corresponding to claim 1, and is similarly rejected (see [0004], [0022]).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Wynne in view of Hyun and Woong (KR102349837; hereinafter “Hyun”).
Regarding Claim 2, Wynne discloses the 3D image mask analysis method of claim 1, further comprising: ([0025-0027], Wynne discloses defining three-dimensional masked areas 8a/8b based on depth information.).
Wynne does not disclose the operation processor analyzing at least one installation parameter of the image receiver to acquire coordinate information of each pixel within the surveillance image.
Hyun discloses the operation processor analyzing at least one installation parameter of the image receiver to acquire coordinate information of each pixel within the surveillance image (Pages 4-5 (also see additional translation found at the end of the attached translated document), Hyun discloses a coordinate mapping unit which calculates a position in the world coordinate system for each pixel in a captured image.).
[Image media_image1.png]
Wynne and Hyun are considered to be analogous to the claimed invention as they are in the same field of using image processing techniques to determine real-world information obtained from an image. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wynne such that the coordinate information used by Wynne to determine a three-dimensional mask is obtained by the image processing methods disclosed by Hyun. The motivation for this combination is the ability to map per-pixel coordinate information within an image to the real world, which can be beneficial for determining information regarding particular regions of an image.
Claim 11 is the apparatus claim corresponding to claim 2, and is similarly rejected (see [0004], [0022], Wynne).
Claims 3 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Wynne in view of Liu et al. (US 2012/0114182; hereinafter “Liu”).
Regarding Claim 3, Wynne discloses the 3D image mask analysis method of claim 1.
Wynne does not disclose wherein the 3D image mask analysis method is further applied to the image surveillance apparatus having a stepper motor adapted to control a focusing step of the image receiver, the 3D image mask analysis method further comprises: the operation processor analyzing a plurality of focusing steps of a plurality of areas divided from the surveillance image; and the operation processor analyzing related focusing steps of some of the plurality of areas relevant to the 3D image mask for acquiring the first depth information.
Liu discloses wherein the 3D image mask analysis method is further applied to the image surveillance apparatus having a stepper motor adapted to control a focusing step of the image receiver, the 3D image mask analysis method further comprises (see Fig. 2, specifically driving mechanism 312, Liu discloses an image sensor with a driving mechanism.): the operation processor analyzing a plurality of focusing steps of a plurality of areas divided from the surveillance image (Fig. 11, [0026], Liu discloses capturing a plurality of images at different focus scales.); and the operation processor analyzing related focusing steps of some of the plurality of areas relevant to the 3D image mask for acquiring the first depth information (Fig. 11, [0028-0029], Liu discloses, for each region in an image, identifying the focus image which produces the sharpest region, and utilizing a lookup table to determine the DOF (depth of field) value, which is used to determine a depth map for the image.).
Wynne and Liu are considered to be analogous to the claimed invention as they are in the same field of using image processing techniques to determine real-world information obtained from an image. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wynne such that the depth information utilized by Wynne is obtained based on the focusing methods disclosed by Liu. The motivation for this combination is the ability to utilize features of the imaging device (i.e., obtaining a plurality of focusing steps) as a means to determine depth, as opposed to utilizing complex image processing methods or additional imaging sensors.
Claim 12 is the apparatus claim corresponding to claim 3, and is similarly rejected (see [0004], [0022], Wynne).
Claims 4 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Wynne in view of Liu, further in view of Tsurumi and Oba (US 2020/0241549; hereinafter “Tsurumi”).
Regarding Claim 4, Wynne in view of Liu teaches the 3D image mask analysis method of claim 3, further comprising: the operation processor computing an object distance of the target object relative to the image surveillance apparatus (Fig. 2, [0026-0027], Wynne discloses an individual 14 that is located in an area 13b overlapping with three-dimensional mask 8b, and consequently depth information associated with individual 14.).
The current combination of Wynne in view of Liu does not explicitly teach computing an object distance of the target object relative to the image surveillance apparatus (italicized for context) in accordance with the related focusing steps; and the operation processor determining coordinate information of the target object within the surveillance image in accordance with the object distance.
Liu discloses computing an object distance of the target object relative to the image surveillance apparatus (italicized for context) in accordance with the related focusing steps (Fig. 11, [0026-0029], Liu discloses capturing a plurality of images at different focus scales, and for each region in an image, identifying the focus image which produces the sharpest region, and utilizing a lookup table to determine the DOF (depth of field) value, which is used to determine a depth map for the image.).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to further modify the current combination of Wynne in view of Liu such that Liu’s methods of determining depth are utilized to determine the object distance of the target object. The motivation for this combination is the ability to utilize features of the imaging device (i.e., obtaining a plurality of focusing steps) as a means to determine depth, as opposed to utilizing complex image processing methods or additional imaging sensors.
Wynne in view of Liu does not teach and the operation processor determining coordinate information of the target object within the surveillance image in accordance with the object distance.
Tsurumi discloses and the operation processor determining coordinate information of the target object within the surveillance image in accordance with the object distance ([0271], Tsurumi discloses determining a Z-coordinate of a real object based on a separation distance between a camera and the real object.).
Wynne, Liu, and Tsurumi are considered to be analogous to the claimed invention as they are in the same field of using image processing techniques to determine real-world information obtained from an image. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wynne in view of Liu such that the depth/distance information obtained by Wynne in view of Liu is used to determine a real-world coordinate position based on the methods disclosed by Tsurumi. The motivation for this combination is the ability to translate the depth/distance information to real-world coordinate information, which is beneficial for locating/positioning objects in the real world.
Claim 13 is the apparatus claim corresponding to claim 4, and is similarly rejected (see [0004], [0022], Wynne).
Claims 5 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Wynne in view of CAD CAM Tutorials (“AutoCAD Draw Rectangle from Center”, Published Date: 05/27/2017, https://www.youtube.com/watch?v=OEFKcPpZNiM; hereinafter “CCT-1”).
Regarding Claim 5, Wynne discloses the 3D image mask analysis method of claim 1, further comprising:
the operation processor acquiring first 3D coordinate information of a cursor located at a first position point and second 3D coordinate information of the cursor located at a second position point within the surveillance image ([0006-0007], [0009], Wynne discloses a selection module which a user can interact with via a graphical interface to select a two-dimensional region such as a rectangle, which is defined in three-dimensional information. The Examiner notes that the process of defining a rectangular region involves defining a first position point and second position point.); and
Wynne does not disclose the operation processor replacing second plane coordinate information of the second 3D coordinate information by first plane coordinate information of the first 3D coordinate information when determining the second position point being located directly above the first position point, and then calibrating height coordinate information of the second 3D coordinate information, so as to acquire a calibrated second 3D coordinate information for defining the 3D image mask.
CCT-1 discloses the operation processor replacing second plane coordinate information of the second 3D coordinate information by first plane coordinate information of the first 3D coordinate information when determining the second position point being located directly above the first position point, and then calibrating height coordinate information of the second 3D coordinate information, so as to acquire a calibrated second 3D coordinate information for defining the 3D image mask (CCT-1 discloses selecting a first point (see 1:33, wherein the first point has coordinates (37.5687, 14.3556)) and subsequently selecting a second point (see 1:37, wherein the second point has coordinates (37.5687, 11.4244)). The Examiner notes that the user is able to freely select the second point, but the software (Autodesk AutoCAD) can recognize and “snap” the second point such that it is collinear with the first point, and consequently “replace” the coordinate information. While this specific video provided by CAD CAM Tutorials positions the second point beneath the first point, the Examiner asserts that Autodesk AutoCAD is not limited such that the second point must be selected below the first point; rather, the user is able to select the second point of the line in any direction with respect to the first point (the Examiner notes specifically another video by CAD CAM Tutorials (“AutoCAD Drawing Tutorial for Beginners – 1”, Published Date: 12/26/2019, https://www.youtube.com/watch?v=47_zypTqZe0), wherein between 1:35-1:45 it is shown that the second point can be placed directly above the first point and similarly “snapped” into place).
Furthermore, while the video referenced utilizes a two-dimensional coordinate system, Autodesk AutoCAD utilizes the same method to generate three-dimensional objects (see 1:30-2:30 from CAD CAM Tutorials (“AutoCAD 3D Basic Tutorial for Beginners – 1”, Published Date: 06/07/2020, https://www.youtube.com/watch?v=Vm_fQom-Rm8&list=PLrOFa8sDv6jcfsE__xMC2Igl3Y3DXhs8g), where a rectangle is defined in a process similar to the aforementioned videos, and is used as the base to define a three-dimensional rectangular prism such that the coordinates of each collinear vertex match in the x-coordinate and y-coordinate. Also see 0:35-0:45 from CAD CAM Tutorials (“AutoCAD 3D Dimensioning Tutorial | AutoCAD 3D Dimension in Z Axis | AutoCAD 3D Tips and Tricks”, Published Date: 02/02/2018, https://www.youtube.com/watch?v=P7aIEuDRro4&list=PLrOFa8sDv6jcfsE__xMC2Igl3Y3DXhs8g&index=24) regarding determining a height of a three-dimensional shape, which requires utilizing coordinate information of two three-dimensional points.).
[Image media_image2.png: 1:33 from "AutoCAD Draw Rectangle from Center" (https://www.youtube.com/watch?v=OEFKcPpZNiM), where the first point of the line is defined at (37.5687, 14.3556)]
[Image media_image4.png: 1:37 from "AutoCAD Draw Rectangle from Center", where the second point "snaps" onto the same axis as the first point, as shown by the coordinate information in the bottom left, where the x-coordinate is "replaced" so that it matches the first point]
[Image media_image6.png: 1:35 from "AutoCAD Draw Rectangle from Center", where the user is about to select the second point and the cursor is slightly off-angle from the first point, as noted by the coordinates at the bottom left]
[Image media_image8.png: 2:30 from "AutoCAD 3D Basic Tutorial for Beginners – 1" (https://www.youtube.com/watch?v=Vm_fQom-Rm8&list=PLrOFa8sDv6jcfsE__xMC2Igl3Y3DXhs8g), where a 3D rectangular prism is defined based on a 2D rectangle base; the collinear vertices circled in red have the same x-coordinate and y-coordinate, and a different z-coordinate]
[Image media_image9.png: 1:42 from "AutoCAD Drawing Tutorial for Beginners – 1" (https://www.youtube.com/watch?v=47_zypTqZe0), showing that the second point can be placed directly above the first point in a manner similar to what is shown in "AutoCAD Draw Rectangle from Center"]
Wynne and CCT-1 are considered to be analogous to the claimed invention as they are in the same field of determining coordinate information for defining regions in a three-dimensional space. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wynne such that the process of defining a rectangular area, as disclosed by Wynne, utilizes the processes and logic disclosed by CAD CAM Tutorials in order to correct the positioning of the second coordinate. The motivation for this combination is the ability to correct user input such that defined shapes can be easily made.
Claim 14 is the apparatus claim corresponding to claim 5, and is similarly rejected (see [0004], [0022], Wynne).
Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Wynne in view of CAD CAM Tutorials (“AutoCAD 3D Basic Tutorial for Beginners - 1”, Published Date: 06/07/2020, https://www.youtube.com/watch?v=Vm_fQom-Rm8&list=PLrOFa8sDv6jcfsE__xMC2Igl3Y3DXhs8g; hereinafter “CCT-2”).
Regarding Claim 6, Wynne discloses the 3D image mask analysis method of claim 1, further comprising:
the operation processor acquiring first 3D coordinate information of a cursor located at a first position point within the surveillance image ([0006-0007], [0009], Wynne discloses a selection module which a user can interact with via a graphical interface to select a two-dimensional region such as a rectangle.); and
Wynne does not disclose the operation processor generating a virtual auxiliary line upwardly extended from the first position point for defining the 3D image mask via the virtual auxiliary line; wherein plane coordinate information of all pixels of the virtual auxiliary line are the same as first plane coordinate information of the first 3D coordinate information.
CCT-2 discloses the operation processor generating a virtual auxiliary line upwardly extended from the first position point for defining the 3D image mask via the virtual auxiliary line; wherein plane coordinate information of all pixels of the virtual auxiliary line are the same as first plane coordinate information of the first 3D coordinate information (CCT-2 discloses generating a three-dimensional shape by extruding a two-dimensional shape. The Examiner specifically notes 2:20-2:40, wherein the shape is extended in the z-dimension and a virtual line from one corner point is shown extending virtually upwards, wherein all the points on the virtual line will share the same x-coordinate and y-coordinate.).
[Image media_image11.png: screenshot from "AutoCAD 3D Basic Tutorial for Beginners – 1" (https://www.youtube.com/watch?v=Vm_fQom-Rm8&list=PLrOFa8sDv6jcfsE__xMC2Igl3Y3DXhs8g), where a 3D rectangular prism is defined based on a 2D rectangle base; as the 3D volume is rendered, an auxiliary line is shown where all the points on the line have the same x-coordinate and y-coordinate]
Wynne and CCT-2 are considered to be analogous to the claimed invention as they are in the same field of determining coordinate information for defining regions in a three-dimensional space. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wynne such that it incorporates the virtual auxiliary line disclosed by CCT-2. The motivation for this combination is the ability to help guide user input when generating an image mask.
Claim 15 is the apparatus claim corresponding to claim 6, and is similarly rejected (see [0004], [0022], Wynne).
Claims 7 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Wynne in view of Kim (KR20130105246; hereinafter “Kim”), further in view of Venetianer et al. (US 2013/0184592; hereinafter “Venetianer”).
Regarding Claim 7, Wynne discloses the 3D image mask analysis method of claim 1, further comprising: ([0018], [0027], Wynne discloses determining a difference in depth/distance between an object/individual (i.e., second depth information) and an occlusion region (i.e., first depth information), which is used in the determination of whether an object/individual should be present in an image. The Examiner asserts that the decision on whether the object/individual should be present in an image is based on using the difference and comparing it to some threshold (i.e., a depth threshold, where “threshold” is interpreted to mean any value or condition used to define a decision), such that a binary decision is reached (i.e., whether the object/individual should be present in the image or should not be present in the image). For example, the Examiner notes [0014], wherein Wynne provides as an example the masked/obscured area being 3 meters away from the camera. An individual in front of the obscured area (e.g., 2.5 meters from the camera) will be present in the image, and the difference (3 meters – 2.5 meters = 0.5 meters) is smaller than a depth threshold (e.g., 1 meter), so the individual is therefore still displayed in the image.).
Wynne does not disclose the operation processor utilizing the object identification technology to acquire height information of the target object, and the height information is greater than or equal to a height threshold.
Kim discloses the operation processor utilizing the object identification technology to acquire height information of the target object (Page 4, Page 6, Kim discloses obtaining height information on a Z-axis relating to a subject.),
[Image media_image12.png]
[Image media_image13.png]
Wynne and Kim are considered to be analogous to the claimed invention as they are in the same field of determining coordinate information for defining regions in a three-dimensional space. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wynne such that it incorporates Kim’s methods of obtaining three-dimensional height information relating to an object. The motivation for this combination is the ability to account for height information when determining the proximity/distance of an object to a desired region/area.
Wynne in view of Kim does not teach the height information is greater than or
equal to a height threshold.
Venetianer discloses the height information is greater than or equal to a height threshold ([0053], [0069-0070], [0082], Venetianer discloses obtaining a real-world height associated with a person/object, and consequently using a height threshold to determine relevant portions of an image (i.e., portions of an image greater than a height threshold) compared to portions of an image which need to be removed (i.e., a shadow present at a height lower than a height threshold).).
Wynne, Kim, and Venetianer are considered to be analogous to the claimed invention as they are in the same field of analyzing objects in three-dimensional space. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wynne in view of Kim such that it incorporates the logic used by Venetianer in order to display objects only when the height of the object is greater than a height threshold. The motivation for this combination is the ability to consider relevant objects which are large enough (i.e., a person compared to a small object such as a dog).
Claim 16 is the apparatus claim corresponding to claim 7, and is similarly rejected (see [0004], [0022], Wynne).
Claims 8 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Wynne in view of Kong (KR102474697; hereinafter “Kong”).
Regarding Claim 8, Wynne discloses the 3D image mask analysis method of claim 1.
Wynne does not disclose the operation processor further acquiring object coordinate information of several pixels contained by a contour of the target object within the surveillance image; and the operation processor replacing mask pixel information of the 3D image mask inside object coordinate information by object pixel information of the several pixels of the target object.
[Image media_image14.png]
Kong discloses the operation processor further acquiring object coordinate information of several pixels contained by a contour of the target object within the surveillance image (Pages 8-9, Kong discloses a process of obtaining pixel information of a masking target 3 and comparing it to pre-stored information. In regions where there is a low match, those pixels therefore represent pixel information associated with a monitoring target 5 (i.e., target object).); and the operation processor replacing mask pixel information of the 3D image mask inside object coordinate information by object pixel information of the several pixels of the target object (Pages 8-9, Kong discloses determining regions where pixels of a current image and pixels from a pre-stored image do not match (i.e., representing a target object), and consequently that portion of the image is not shielded by the privacy mask 112. Also see [0027], wherein Wynne discloses having an individual be present in an edited image and not be covered by the mask.).
Wynne and Kong are considered to be analogous to the claimed invention as they are in the same field of using image processing techniques to determine real-world information obtained from an image. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wynne such that the process of displaying the target object by replacing pixels is done through the process disclosed by Kong. The motivation for this combination is the ability to use a direct pixel comparison method in order to determine the exact regions of an image which should be replaced.
Claim 17 is the apparatus claim corresponding to claim 8, and is similarly rejected (see [0004], [0022], Wynne).
Claims 9 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Wynne in view of CVAT (“CVAT Product Tour #2: How to annotate Bounding Boxes.”, Published Date: 11/29/2022, https://www.youtube.com/watch?v=prsxK2BeG6k; hereinafter “CVAT”).
Regarding Claim 9, Wynne discloses the 3D image mask analysis method of claim 1.
Wynne does not disclose the operation processor setting a first position point of a cursor within the surveillance image as a first corner point of an auxiliary box with a known size, and at least acquiring a second corner point and a third corner point of the auxiliary box; and the operation processor adjusting coordinate information of the second corner point and/or the third corner point in accordance with an input command, so as to change a coverage range of the auxiliary box for defining the 3D image mask.
[Image media_image15.png: 2:09 from "CVAT Product Tour #2: How to annotate Bounding Boxes" (https://www.youtube.com/watch?v=prsxK2BeG6k), where a yellow bounding box is defined and a user can change the shape of the bounding box by dragging the corners]
CVAT discloses the operation processor setting a first position point of a cursor within the surveillance image as a first corner point of an auxiliary box with a known size, and at least acquiring a second corner point and a third corner point of the auxiliary box; and the operation processor adjusting coordinate information of the second corner point and/or the third corner point in accordance with an input command, so as to change a coverage range of the auxiliary box for defining the 3D image mask (see 2:00-2:20, CVAT discloses generating a bounding box, from which a corner can be selected (note the red circled corner in the image above) to resize the bounding box.).
Wynne and CVAT are considered to be analogous to the claimed invention as they are in the same field of defining bounding boxes and regions in images. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wynne such that the bounding region defined by Wynne can be modified using the methods and logic defined by CVAT. The motivation for this combination is the ability to improve the user experience and to allow the user to easily change the shape of the bounding box.
Claim 18 is the apparatus claim corresponding to claim 9, and is similarly rejected (see [0004], [0022], Wynne).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PROMOTTO TAJRIAN ISLAM whose telephone number is (703)756-5584. The examiner can normally be reached Monday - Friday 8:30 am - 5:00 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chan Park can be reached at (571) 272-7409. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PROMOTTO TAJRIAN ISLAM/Examiner, Art Unit 2669 /CHAN S PARK/Supervisory Patent Examiner, Art Unit 2669