DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 59-61 and 130 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This analysis is based on the 2024 Guidance Update on Patent Subject Matter Eligibility, Including on Artificial Intelligence (2024 AI SME Update) published on July 17, 2024 (89 FR 58128).
Step 1:
Claims 1 and 59-61 recite a method and claim 130 recites a system, which fall under the statutory categories of processes and machines, respectively. Therefore, Step 1 is met.
Step 2A, Prong 1:
Claims 1 and 130 recite “recognizing the 3D character based on the point cloud data of the 3D character in the depth information, wherein: the 3D character includes more than two characters, and the recognizing the 3D character based on the point cloud data of the 3D character in the depth information includes: segmenting the 3D character based on the point cloud data of the 3D character to obtain point cloud data of a segmented 3D character; and determining a character outline based on the point cloud data of the segmented 3D character.” Excluding the recited processor, these limitations fall within the mental process grouping of abstract ideas because they cover concepts that can be performed in the human mind, including observation, evaluation, judgment, and opinion. Under their broadest reasonable interpretation in light of the specification, segmenting the point cloud data and determining a character outline encompass mental processes that can practically be performed in the human mind. See MPEP 2106.04(a)(2), subsection III.
Dependent claims 59-61 further clarify previously established limitations that may be practically performed in the human mind using observation, evaluation, judgment, and opinion. For example, determining the point cloud data of the 3D character based on the overall point cloud data and the 3D model information can be accomplished by a person observing both sets of data and noting which areas match. The same applies to the point cloud data of the workpiece surface. As for base plane point cloud data exceeding a threshold, a person could observe which points lie beyond a particular distance from the base plane and treat those points as included in the 3D character data.
Step 2A, Prong 2:
The limitations of claim 130 are recited as being performed by a “processor”. The processor is recited at a high level of generality and is used to perform the abstract idea discussed above in Step 2A, Prong 1, such that it amounts to no more than mere instructions to apply the exception using a generic computer. See MPEP 2106.05(f), which provides the following considerations for determining whether a claim simply recites a judicial exception with the words “apply it” (or an equivalent), such as mere instructions to implement an abstract idea on a computer: (1) whether the claim recites only the idea of a solution or outcome, i.e., fails to recite details of how the solution to the problem is accomplished; (2) whether the claim invokes computers or other machinery merely as a tool to perform an existing process; and (3) the particularity or generality of the application of the judicial exception.
In evaluating whether the claimed invention integrates the abstract idea into a practical application, it must be clear that the claimed invention improves the functioning of a computer or improves another technology or technical field. To establish such an improvement, the specification must set forth an improvement in technology and the claim itself must reflect the disclosed improvement. See MPEP 2106.04(d)(1) and 2106.05(a).
According to the specification, the improvement is to increase the accuracy of the character outline, thereby increasing the recognition accuracy of the 3D character. This is stated to be accomplished by generating added point cloud data and projecting the depth information into a 2D image for subsequent character recognition. While this improvement is properly reflected in dependent claims 10, 14, 19, 22 and 129, which are accordingly not rejected under 35 U.S.C. 101, it is not clearly reflected in claims 1, 59-61 and 130.
Step 2B:
In claims 1, 59 and 130, the limitations of “obtaining depth information of the 3D character, wherein the depth information includes point cloud data of the 3D character,” “obtaining overall point cloud data of the 3D character and a surface of a workpiece where the 3D character is located,” and “obtaining 3D model information of the surface of the workpiece where the 3D character is located” amount to merely receiving data. These limitations are considered insignificant extra-solution activity. They are therefore further evaluated to determine whether the extra-solution activity is well-understood, routine, and conventional in the field. See MPEP 2106.05(g). Receiving data is well-understood, routine, and conventional in the field; therefore, these limitations do not add an inventive concept to the claims.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1, 10, 14, 15 and 130 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Zhang et al. (CN 107292309, utilizing a machine translation).
Regarding claim 1, Zhang et al. discloses a method for recognizing a three-dimensional (3D) character, comprising:
obtaining depth information of the 3D character, wherein the depth information includes point cloud data of the 3D character (“By scanning the entire area of the character being measured from top to bottom or from left to right with the line structured light emitted by the laser, the initial point cloud data containing all the information of the three-dimensional character being measured can be obtained” at paragraph 0010, last sentence); and
recognizing the 3D character based on the point cloud data of the 3D character in the depth information (“Step 4: Display the remaining point cloud from the normal direction of the plane to obtain the image of the character.” at paragraph 0013), wherein:
the 3D character includes more than two characters (see Figures 2 and 4 for display of the multiple characters), and the recognizing the 3D character based on the point cloud data of the 3D character in the depth information includes:
segmenting the 3D character based on the point cloud data of the 3D character to obtain point cloud data of a segmented 3D character (“Step 3: Use the plane equation fitted in Step 2 to cut the overall point cloud data. Points in the point cloud data that are less than 0.3 mm below the plane are identified as points with character depth information and are retained.” at paragraph 0012); and
determining a character outline based on the point cloud data of the segmented 3D character (Figure 4 demonstrates the outlines of the characters as output).
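As an aid to understanding the mapping above, the plane-fitting and cutting operation quoted from Zhang et al. can be illustrated with the following sketch (illustrative Python with hypothetical names; it approximates the quoted Steps 2-3 under a least-squares plane model and is not the reference's actual implementation):

```python
import numpy as np

def segment_character_points(points, depth_threshold=0.3):
    """Fit a base plane to an (N, 3) point cloud by least squares and
    retain points lying below the plane by more than the threshold, as
    in Zhang et al.'s quoted Steps 2-3. Illustrative sketch only."""
    # Least-squares plane z = a*x + b*y + c over all points.
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    a, b, c = coeffs
    # Signed distance of each point from the fitted plane.
    dist = (points[:, 2] - (a * points[:, 0] + b * points[:, 1] + c)) \
        / np.sqrt(a**2 + b**2 + 1.0)
    # Keep points below the plane past the threshold (engraved
    # characters); 0.3 mm matches the quoted example. The sign
    # convention would flip for raised (embossed) characters.
    return points[dist < -depth_threshold]
```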
Regarding claim 130, Zhang et al. discloses a system, comprising:
at least one storage device including a set of instructions (a programmed computer carrying out the described processing, and thus a stored set of instructions, is implied); and
at least one processor configured to communicate with the at least one storage device (processor of the implied computer), wherein when executing the set of instructions, the at least one processor is configured to direct the system to perform operations including:
obtaining depth information of the 3D character, wherein the depth information includes point cloud data of the 3D character (“By scanning the entire area of the character being measured from top to bottom or from left to right with the line structured light emitted by the laser, the initial point cloud data containing all the information of the three-dimensional character being measured can be obtained” at paragraph 0010, last sentence); and
recognizing the 3D character based on the point cloud data of the 3D character in the depth information (“Step 4: Display the remaining point cloud from the normal direction of the plane to obtain the image of the character.” at paragraph 0013), wherein:
the 3D character includes more than two characters (see Figures 2 and 4 for display of the multiple characters), and the recognizing the 3D character based on the point cloud data of the 3D character in the depth information includes:
segmenting the 3D character based on the point cloud data of the 3D character to obtain point cloud data of a segmented 3D character (“Step 3: Use the plane equation fitted in Step 2 to cut the overall point cloud data. Points in the point cloud data that are less than 0.3 mm below the plane are identified as points with character depth information and are retained.” at paragraph 0012); and
determining a character outline based on the point cloud data of the segmented 3D character (Figure 4 demonstrates the outlines of the characters as output).
Regarding claim 10, Zhang et al. discloses a method wherein the segmenting the 3D character based on the point cloud data of the 3D character includes:
determining a reference baseline based on the 3D character (“Step 2: Apply the overall least squares method to construct the plane equations of the initial point cloud data” at paragraph 0011, line 1);
obtaining a projection result by projecting the point cloud data of the 3D character to the reference baseline (“Step 3: Use the plane equation fitted in Step 2 to cut the overall point cloud data. Points in the point cloud data that are less than 0.3 mm below the plane are identified as points with character depth information” at paragraph 0012, line 1); and
determining a segmentation boundary based on the projection result (“Step 4: Display the remaining point cloud from the normal direction of the plane to obtain the image of the character.” at paragraph 0013).
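The projection-based segmentation read onto claim 10 can be illustrated as follows (an illustrative sketch of projecting points onto a baseline and locating low-count regions as boundaries; the function name, bin width, and count threshold are hypothetical and not taken from either reference):

```python
import numpy as np

def segmentation_boundaries(points, axis=0, bin_width=0.1, min_count=5):
    """Project character points onto a reference baseline (here, one
    coordinate axis), histogram the projections, and place a boundary
    at the center of each low-count gap. Illustrative sketch only."""
    coords = points[:, axis]
    bins = np.arange(coords.min(), coords.max() + bin_width, bin_width)
    counts, edges = np.histogram(coords, bins=bins)
    # Bins whose projected point count falls below the threshold are
    # treated as segmentation regions between characters.
    gap_bins = counts < min_count
    boundaries, in_gap, start = [], False, 0
    for i, is_gap in enumerate(gap_bins):
        if is_gap and not in_gap:
            start, in_gap = i, True
        elif not is_gap and in_gap:
            boundaries.append((edges[start] + edges[i]) / 2.0)
            in_gap = False
    return boundaries
```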
Regarding claim 14, Zhang et al. discloses a method wherein the determining a character outline based on the point cloud data of the segmented 3D character includes:
determining a reference base plane based on the point cloud data of the segmented 3D character (“Step 2: Apply the overall least squares method to construct the plane equations of the initial point cloud data” at paragraph 0011, line 1);
obtaining a projection result by projecting the point cloud data of the segmented 3D character to the reference base plane (“Step 3: Use the plane equation fitted in Step 2 to cut the overall point cloud data. Points in the point cloud data that are less than 0.3 mm below the plane are identified as points with character depth information” at paragraph 0012, line 1); and
determining the character outline of the segmented 3D character based on the projection result (“Step 4: Display the remaining point cloud from the normal direction of the plane to obtain the image of the character.” at paragraph 0013).
Regarding claim 15, Zhang et al. discloses a method wherein the determining a reference base plane based on the segmented 3D character includes:
determining the reference base plane based on a predetermined algorithm and the point cloud data of the segmented 3D character, wherein: the predetermined algorithm includes a least square algorithm (“Step 2: Apply the overall least squares method to construct the plane equations of the initial point cloud data” at paragraph 0011, line 1) or a Chebyshev algorithm.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 19 and 128 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Zhang et al. and Zhang et al. (CN 110375645, hereinafter Zhang ‘645, utilizing a machine translation).
Regarding claim 19, Zhang et al. discloses the elements of claim 1 as described above.
Zhang et al. does not explicitly disclose that recognizing the 3D character based on the point cloud data of the 3D character in the depth information further includes: determining a corresponding 2D image based on a recognition result of the character outline of the 3D character, wherein the 2D image is a binarized image; and the determining a corresponding 2D image based on a recognition result of the character outline of the 3D character includes: marking a point corresponding to the character outline of the 3D character as 0; marking a point corresponding to a background other than the 3D character as 1; or marking the point corresponding to the character outline of the 3D character as 1; marking the point corresponding to the background other than the 3D character as 0.
Zhang ‘645 teaches a method in the same field of endeavor of character recognition, wherein the recognizing the 3D character based on the point cloud data of the 3D character in the depth information further includes:
determining a corresponding 2D image based on a recognition result of the character outline of the 3D character (“S421. Project the binarized image of the character along the x-axis and y-axis respectively, that is, accumulate the pixel values of the binarized image to obtain two projected images” at paragraph 0116, line 1; “In one embodiment, step S4, the method for extracting a single character, as a functional option, also includes a character recognition algorithm, i.e., extracting and recognizing characters;” at paragraph 0118, line 1), wherein the 2D image is a binarized image (“S41. Perform binarization on the point cloud data of the entire character surface.” at paragraph 0111, line 1); and
the determining a corresponding 2D image based on a recognition result of the character outline of the 3D character includes:
marking a point corresponding to the character outline of the 3D character as 0; marking a point corresponding to a background other than the 3D character as 1; or
marking the point corresponding to the character outline of the 3D character as 1; marking the point corresponding to the background other than the 3D character as 0 (“mark the positions of character points as 1 and the rest as 0 to generate a binarized character image” at paragraph 0111, line 2).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to utilize the character isolation as taught by Zhang ‘645 in extracting the character information of Zhang et al. to “accurately obtain the size of characters, the spacing between adjacent characters, and the depth or height of characters” (Zhang ‘645 at paragraph 0123, line 4).
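The binarization scheme quoted from Zhang ‘645 (character points marked 1, the rest 0) can be illustrated as follows (an illustrative sketch with hypothetical names and a hypothetical grid resolution; not the reference's actual code):

```python
import numpy as np

def binarize_character_image(points_2d, resolution=0.1):
    """Rasterize projected 2D character points into a binary image,
    marking character positions as 1 and the background as 0, per the
    scheme quoted from Zhang '645. Illustrative sketch only."""
    x, y = points_2d[:, 0], points_2d[:, 1]
    # Map coordinates to integer pixel indices on a regular grid.
    cols = ((x - x.min()) / resolution).astype(int)
    rows = ((y - y.min()) / resolution).astype(int)
    image = np.zeros((rows.max() + 1, cols.max() + 1), dtype=np.uint8)
    image[rows, cols] = 1  # character points -> 1, background stays 0
    return image
```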
Regarding claim 128, Zhang et al. discloses the elements of claim 10 as described above.
Zhang et al. does not explicitly disclose that determining a segmentation boundary based on the projection result includes: determining a region where a count of projection points lower than a predetermined threshold as a segmentation region based on distribution of the projection points on the reference baseline, or determining a region where a change value in the count of projection points greater than a predetermined difference as the segmentation region; and determining the segmentation boundary based on the segmentation region.
Zhang ‘645 teaches a method in the same field of endeavor of character recognition, the determining a segmentation boundary based on the projection result includes:
determining a region where a count of projection points (“S2223. Count the number of remaining points on the original contour curve that fall on the fitted contour curve;” at paragraph 0021) lower than a predetermined threshold as a segmentation region based on distribution of the projection points on the reference baseline (“S2224. Determine whether the number of remaining points falling on the fitted contour curve has reached the maximum. If yes, proceed to step S2226; otherwise, proceed to step S2225” at paragraph 0022; “S2225. Remove points that fall outside the fitted wheel hub curve and return to step S2222” at paragraph 0023; therefore, there must be points less than the maximum to end the fitting), or determining a region where a change value in the count of projection points greater than a predetermined difference as the segmentation region; and
determining the segmentation boundary based on the segmentation region (“S31. Calculate the distance from each point on the scan profile curve to the polynomial curve fitted to the profile curve” at paragraph 0026; “S32. Based on the concave and convex features of the characters and the preset depth/height range of the characters, filter out the character point cloud data” at paragraph 0027).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to utilize the character isolation as taught by Zhang ‘645 in extracting the character information of Zhang et al. to “accurately obtain the size of characters, the spacing between adjacent characters, and the depth or height of characters” (Zhang ‘645 at paragraph 0123, line 4).
Claim(s) 22, 23 and 59-61 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Zhang et al. and Link et al. (US 2019/0156472).
Regarding claim 22, Zhang et al. discloses the elements of claim 1 as described above.
Zhang et al. does not explicitly disclose that recognizing the 3D character based on the point cloud data of the 3D character in the depth information includes: generating at least one added point cloud data by processing the point cloud data of the 3D character; and determining a character outline of the 3D character based on the at least one added point cloud data and the point cloud data of the 3D character.
Link et al. teaches a method in the same field of endeavor of workpiece inspection, wherein recognizing the object based on the point cloud data of the 3D object in the depth information includes:
generating at least one added point cloud data by processing the point cloud data of the 3D object; and determining an object outline of the 3D object based on the at least one added point cloud data and the point cloud data of the 3D object (“The corrected 3-D point cloud from the laser scans of the object is then interpolated to a predetermined geometric grid for comparison to the CAD model” at paragraph 0078, line 5; the interpolated point cloud data therefore corresponds to the object outline in an additional set of point cloud data).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to utilize the interpolation and matching as taught by Link et al. on the character point cloud data of Zhang et al. to ensure the collected point cloud data is rectified for comparison of the stored model data for subsequent recognition.
Regarding claim 23, the Zhang et al. and Link et al. combination discloses a method wherein the determining a character outline of the 3D character based on the at least one added point cloud data and the point cloud data of the 3D character includes:
determining projection data based on the at least one added point cloud data and the point cloud data of the 3D character (“The system processing unit retrieves the CAD point cloud that was determined to match the part outline from a part recognition & lookup process. The CAD point cloud is rotated so that it matches the determined coordinate geometry of the object. The corrected 3-D point cloud from the laser scans of the object is then interpolated to a predetermined geometric grid for comparison to the CAD model. Through subtractive reasoning, the interpolated corrected 3-D point cloud and CAD model are paired and a series of D values are calculated for, and associated with, each point in the point cloud” Link et al. at paragraph 0078, line 1; the aligning of the interpolated point cloud data and the CAD point cloud constitutes a projection of data into a shared coordinate space); and
determining the character outline of the 3D character based on the projection data (given that the model and interpolated data match each other sufficiently, the character is deemed to be recognized).
Regarding claim 59, Zhang et al. discloses a method as described in claim 1 above.
Zhang et al. does not explicitly disclose that obtaining depth information of the 3D character includes obtaining the point cloud data of the 3D character, including: obtaining overall point cloud data of the 3D character and a surface of a workpiece where the 3D character is located; obtaining 3D model information of the surface of the workpiece where the 3D character is located; and determining the point cloud data of the 3D character based on the overall point cloud data and the 3D model information.
Link et al. teaches a method in the same field of endeavor of workpiece inspection, wherein the obtaining depth information of the 3D object includes obtaining the point cloud data of the 3D object, including:
obtaining overall point cloud data of the 3D object and a surface of a workpiece where the 3D object is located (“The laser module 200 may be, but is not limited to, a laser profilometer with an illumination light beam having a wavelength in the violet or ultraviolet range, used to scan the object and output a three-dimensional (3-D) unstructured point cloud” at paragraph 0027, line 18);
obtaining 3D model information of the surface of the workpiece where the 3D object is located (“The system processing unit retrieves the CAD point cloud that was determined to match the part outline from a part recognition & lookup process” at paragraph 0078, line 1); and
determining the point cloud data of the 3D object based on the overall point cloud data and the 3D model information (“The corrected 3-D point cloud from the laser scans of the object is then interpolated to a predetermined geometric grid for comparison to the CAD model” at paragraph 0078, line 5; the matched and aligned point cloud of the object is therefore the resulting point cloud of the extracted object).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to utilize the interpolation and matching as taught by Link et al. on the character point cloud data of Zhang et al. to ensure the collected point cloud data is rectified for comparison of the stored model data for subsequent recognition.
Regarding claim 60, the Zhang et al. and Link et al. combination discloses a method wherein the determining the point cloud data of the 3D character based on the overall point cloud data and the 3D model information includes:
determining point cloud data of the surface of the workpiece in the overall point cloud data based on the overall point cloud data and the 3D model information (“In an alternative embodiment, the surface dimensions of the objects 160 in the digital images 810 may be determined, and used as an alternative to the pattern matching used in the part recognition and lookup process 860” Link et al. at paragraph 0052, last sentence); and
determining the point cloud data of the 3D character based on the point cloud data of the surface of the workpiece (“The corrected 3-D point cloud from the laser scans of the object is then interpolated to a predetermined geometric grid for comparison to the CAD model” Link et al. at paragraph 0078, line 5; the matched and aligned point cloud of the object is therefore the resulting point cloud of the extracted object).
Regarding claim 61, the Zhang et al. and Link et al. combination discloses a method wherein the determining point cloud data of the surface of the workpiece in the overall point cloud data based on the overall point cloud data and the 3D model information includes:
determining base plane point cloud data in the overall point cloud data that matches the 3D model information of the surface of the workpiece based on a predetermined algorithm (“In an alternative embodiment, the surface dimensions of the objects 160 in the digital images 810 may be determined, and used as an alternative to the pattern matching used in the part recognition and lookup process 860” Link et al. at paragraph 0052, last sentence); and
determining point cloud data having a distance from the base plane point cloud data exceeding a predetermined threshold, as the point cloud data of the 3D character (“Step 3: Use the plane equation fitted in Step 2 to cut the overall point cloud data. Points in the point cloud data that are less than 0.3 mm below the plane are identified as points with character depth information and are retained.” Zhang et al. at paragraph 0012).
Claim(s) 24-26 and 28 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Zhang et al. and Link et al. as applied to claim 22 above, and further in view of Walz et al. (US 11,274,921).
Regarding claim 24, the Zhang et al. and Link et al. combination discloses a method as described in claim 22 above.
The Zhang et al. and Link et al. combination does not explicitly disclose that the generating at least one added point cloud data by processing the point cloud data of the 3D character includes: obtaining position information of two point cloud data in point cloud data of single characters after segmentation; and generating added point cloud data between the two point cloud data based on the position information of the two point cloud data.
Walz et al. teaches a method in the same field of endeavor of point cloud image processing, wherein the generating at least one added point cloud data by processing the point cloud data of the 3D object includes:
obtaining position information of two point cloud data in point cloud data (“Interpolation between some of the data points corresponding to the unobstructed surface (data 106) adjacent the data points corresponding to the obstacle 104 (data 108) may then be performed via the processor unit 52” at col. 6, line 32); and
generating added point cloud data between the two point cloud data based on the position information of the two point cloud data (“Interpolation data points corresponding to the portion of the surface 106 obscured by the obstacle 104 may then be generated via the processor unit, as shown in block 212” at col. 6, line 36).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to utilize the additional interpolation as taught by Walz et al. for the character data of the Zhang et al. and Link et al. combination to be able to fill in sparse or obstructed character data prior to recognition.
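The interpolation concept cited from Walz et al. can be illustrated as follows (a minimal sketch of linear interpolation between two point positions; the names and values are hypothetical and not drawn from the reference):

```python
import numpy as np

def interpolate_between(p1, p2, n_added=3):
    """Generate added point cloud data between two points based on
    their position information, by linear interpolation. Illustrative
    of the cited interpolation concept only, not Walz et al.'s code."""
    t = np.linspace(0.0, 1.0, n_added + 2)[1:-1]  # exclude endpoints
    return p1 + t[:, None] * (p2 - p1)

# Example: fill three added points between two adjacent character points.
added = interpolate_between(np.array([0.0, 0.0, 1.2]),
                            np.array([0.4, 0.0, 1.2]))
```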
Regarding claim 25, the Zhang et al. and Link et al. combination discloses a method as described in claim 22 above.
The Zhang et al. and Link et al. combination does not explicitly disclose that generating at least one added point cloud data by processing the point cloud data of the 3D character includes: obtaining position information of two adjacent point cloud data in the point cloud data of the 3D character; and generating the added point cloud data between the two adjacent point cloud data based on the position information of the two adjacent point cloud data.
Walz et al. teaches a method in the same field of endeavor of point cloud image processing, wherein the generating at least one added point cloud data by processing the point cloud data of the 3D character includes:
obtaining position information of two adjacent point cloud data in the point cloud data of the 3D character (“Interpolation between some of the data points corresponding to the unobstructed surface (data 106) adjacent the data points corresponding to the obstacle 104 (data 108) may then be performed via the processor unit 52” at col. 6, line 32); and
generating the added point cloud data between the two adjacent point cloud data based on the position information of the two adjacent point cloud data (“Interpolation data points corresponding to the portion of the surface 106 obscured by the obstacle 104 may then be generated via the processor unit, as shown in block 212” at col. 6, line 36).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to utilize the additional interpolation as taught by Walz et al. for the character data of the Zhang et al. and Link et al. combination to be able to fill in sparse or obstructed character data prior to recognition.
Regarding claim 26, the Zhang et al., Link et al. and Walz et al. combination discloses a method wherein the obtaining position information of two adjacent point cloud data in the point cloud data of the 3D character includes:
obtaining position information of at least one of the point cloud data of the 3D character to be as position information of initial point cloud data (“Interpolation between some of the data points corresponding to the unobstructed surface (data 106) adjacent the data points corresponding to the obstacle 104 (data 108) may then be performed via the processor unit 52” Walz et al. at col. 6, line 32); and
determining position information of adjacent point cloud data near the initial point cloud data based on the position information of the initial point cloud data and a distance threshold (a nearby point corresponding to a point cloud data point next to the missing data is chosen as the second point as described above).
Regarding claim 28, the Zhang et al., Link et al. and Walz et al. combination discloses a method wherein the generating at least one added point cloud data between two point cloud data includes:
generating the added point cloud data between two point cloud data based on an interpolation algorithm (“Interpolation between some of the data points corresponding to the unobstructed surface (data 106) adjacent the data points corresponding to the obstacle 104 (data 108) may then be performed via the processor unit 52” Walz et al. at col. 6, line 32).
Allowable Subject Matter
Claims 11, 27, 127 and 129 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter. The prior art does not teach or disclose: determining the reference baseline based on midlines of the 3D character in a height direction and a thickness direction, wherein the projecting the point cloud data of the 3D character to the reference baseline includes projecting the point cloud data of the 3D character to the reference baseline in a direction perpendicular to the reference baseline, as required by claim 11; the distance threshold being greater than a resolution of the character information acquisition device, wherein the resolution is used to reflect a distance between two adjacent points in the point cloud data acquired by the character information acquisition device, as required by claim 27; determining the single character reference baseline based on midlines of the 3D character in a height direction and a thickness direction, wherein the reference baseline includes a single line reference baseline for segmenting the 3D character into a plurality of single line characters, and, in response to determining that the single line reference baseline is parallel to the height direction of the 3D character, determining the single line reference baseline, as required by claim 127; or establishing a 2D cylindrical plane coordinate system by unfolding a cylindrical plane of the 3D cylindrical coordinate system, establishing a conversion relationship between 3D cylindrical coordinates and cylindrical coordinates of the 2D cylindrical plane coordinate system for each point in the point cloud data of the segmented 3D character, transforming the point cloud data from the 3D cylindrical coordinate system to the 2D cylindrical plane coordinate system, generating 2D pixel coordinates of the segmented 3D character, and determining the character outline according to the 2D pixel coordinates of each point of a single character, as required by claim 129.
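For context, the cylindrical unfolding recited in claim 129 corresponds to mapping 3D cylindrical coordinates (r, theta, z) to 2D plane coordinates such as (s, z) with arc length s = r * theta. A minimal sketch of such a conversion follows (hypothetical names; not drawn from the application's disclosure):

```python
import numpy as np

def unfold_cylinder(points_xyz, radius=None):
    """Unfold points on or near a cylindrical surface into a 2D plane
    using arc length s = r * theta as the horizontal coordinate, one
    way to realize the conversion described for claim 129. Sketch only."""
    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    r = np.hypot(x, y) if radius is None else radius
    theta = np.arctan2(y, x)
    s = r * theta          # unfolded horizontal coordinate (arc length)
    return np.c_[s, z]     # 2D cylindrical-plane coordinates (s, z)
```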
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KATRINA R FUJITA whose telephone number is (571) 270-1574. The examiner can normally be reached Monday - Friday, 9:30 am - 5:30 pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sumati Lefkowitz, can be reached at 571-272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KATRINA R FUJITA/ Primary Examiner, Art Unit 2672