DETAILED ACTION
This action is in response to the remarks and amendments filed on August 28, 2025. Claims 1-28 are pending and have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-9 are rejected under 35 U.S.C. 103 as being unpatentable over US20120027320 (hereinafter referred to by its primary author, Nakazono) in view of US20140010479 (hereinafter referred to by its primary author, Kwon).
In regards to claim 1, Nakazono teaches an image processing device comprising: a buffer configured to: parallelize pixel data of an image that is received from an external device (Nakazono Paragraph [0019] “The line memory unit 523 temporarily stores input image data input from the image process circuit 5-1 for each unit line. Also, the line memory unit 523 includes storage areas for a predetermined plurality of unit lines”); and a distortion interpolator configured to read interpolation data among the pixel data that are stored in the line memories, based on coordinate information of a target pixel, which is a distorted pixel, and configured to perform the distortion interpolation operation based on the interpolation data (Nakazono Paragraph [0069] “The interpolation calculating unit 525 generates output image data by carrying out the interpolation calculation on the input image data stored in the line memory unit 523 in response to the control signal for starting the interpolation calculation input from the memory control unit 524.”).
Nakazono does not teach determining a pixel unit which indicates a number of pixels whose pixel data is stored in one line memory among line memories, based on a number of horizontal direction pixels that are used for a distortion interpolation operation; and parallelizing pixel data of an image that is received from an external device based on the pixel unit.
However, Kwon teaches determining a pixel unit which indicates a number of pixels whose pixel data is stored in one line memory among line memories, based on a number of horizontal direction pixels that are used for a distortion interpolation operation (Kwon Paragraph [0047] “In the related art, by configuring the line memories having a horizontal size of an image for the above memory so that the line memories have the same number as the maximum correction in a vertical direction, the image data are written in the line memory while alternating input write and read cycles and the four reference pixels to be interpolated are simultaneously read from the four line memories to calculate the distortion corrected pixel values without delaying the processing time due to the data access.” Examiner note: In this reference, prior art is described which is known to have line memories that vary in size based on the size of the picture to be processed. That is, if the image to be processed contains 100x100 pixels, each line memory can be configured to have a size of 100 pixels. If the image contains 200x200 pixels, each line memory can be configured to have a size of 200 pixels, and so on. In these examples, the pixel unit is equal to the horizontal size of the image, which is analogous to determining a pixel unit based on the number of horizontal direction pixels.); and parallelizing pixel data of an image that is received from an external device based on the pixel unit (Kwon Paragraph [0042] “Referring to FIG. 3, the interpolation method according to another example is a method of sequentially storing the input image data in a plurality of line memories 302 and then, simultaneously reading the four pixels to be referenced at the same timing to obtain an interpolation pixel values without the delay time.”).
Kwon is considered analogous to the claimed invention because they are both in the same field of image pixel interpolation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Nakazono to include the teachings of Kwon, to provide the benefits of using less memory, since an entire picture is not stored in line memory, and of less complicated hardware. (Kwon Paragraph [0042] “The method is generally the most frequently used technology at present and has an advantage in that there is no need to store all the frame data and hardware is relatively less complicated, but has a problem in that when the distortion correction or the geometrical conversion of the input images is performed, the number of line memories to be configured according to the referenced interpolation value in a vertical direction is increased proportionately.”)
In regards to claim 2, Nakazono in view of Kwon teaches the image processing device of claim 1, wherein the buffer comprises the line memories (Nakazono Paragraph [0074] “The line memory unit 523 starts an operation of temporarily storing input image data for each unit line.”).
Nakazono in view of Kwon does not explicitly teach wherein a number of line memories exceeds an addition of twice the maximum number of distortion lines of the image and half a number of lines that are used for the distortion interpolation operation.
However, Nakazono in view of Kwon does teach, at Nakazono Paragraph [0019], “storage areas for a predetermined plurality of unit lines”, and at Nakazono Paragraph [0068], “If input image data of the number of unit lines of the post-ULB and the pre-ULB to be used when the interpolation calculation is carried out is provided within the line memory unit 523, a control signal for starting the interpolation calculation is output to the interpolation calculating unit 525.” These passages suggest that the number of line memories can span a range of values and can increase or decrease based on the amount of data needed. It would have been obvious to one of ordinary skill in the art to arrive at the claimed range as a result of routine optimization. One of ordinary skill in the art would have had a reasonable expectation of success in formulating the claimed range because Nakazono Paragraph [0021] states: “FIG. 10 shows a storage area capable of storing input image data for 16 lines (unit lines) included as the line memory unit 523, a line of input image data before a distortion correction corresponding to output image data to be interpolated when output image data of one row (line) is output, and a range (the number of unit lines) of input image data necessary for an interpolation calculation. In the example shown in FIG. 10, an example in which output image data is generated by the bicubic interpolation using a line where input image data before a distortion correction corresponding to the output image data is stored, input image data of its previous line, and input image data for up to two subsequent lines.” This suggests that the number of lines needed will differ based on the interpolation operation (in this application the operation is bilinear interpolation; in the prior art the operation is a bicubic interpolation) and the size of the input and output data. (See MPEP § 2144.05 for more information regarding ranges.)
In regards to claim 3, Nakazono in view of Kwon teaches the image processing device of claim 2, wherein the buffer sequentially stores the pixel data in the line memories according to the pixel unit. (Nakazono Paragraph [0037] “A predetermined amount of the output image data, which is arranged in one row in a row direction within the output image data distributed in a two-dimensional matrix to be distortion-corrected and output, may be designated as one line. A predetermined amount of the input image data, which is arranged in one row in the row direction within the input image data distributed in a two-dimensional matrix, may be designated as one unit line.”)
In regards to claim 4, Nakazono in view of Kwon teaches the image processing device of claim 3, wherein, in response to all of the line memories being full, the buffer stores additional data in a line memory having oldest data, among the line memories. (Nakazono Paragraph [0019] “It is possible to store input image data of a larger number of unit lines than the predetermined number of unit lines by subsequently rewriting input image data of unit lines unused in an interpolation calculation.” Examiner note: In this reference, the input image data that is unused can be image data that has already been used by the operation, and therefore would be the oldest data.)
In regards to claim 5, Nakazono in view of Kwon teaches the image processing device of claim 4, wherein the distortion interpolator further comprises a starter configured to generate a start signal that triggers the distortion interpolation operation based on an amount of data that is stored in the line memories. (Nakazono Paragraph [0036] “the interpolation calculating unit generating the output image data by carrying out an interpolation calculation on the input image data stored in the input image data storage unit based on the coordinates of the input image data stored in the correction information storage unit when the amount of the input image data stored in the input image data storage unit is greater than or equal to an amount necessary for the interpolation calculation.” Examiner note: The start signal here is analogous to when the input data stored is greater than the amount necessary.)
In regards to claim 6, Nakazono in view of Kwon teaches the image processing device of claim 5, wherein the starter outputs the start signal in response to the number of line memories being equal to an amount necessary to start calculations (Nakazono Paragraph [0069] “The interpolation calculating unit 525 generates output image data by carrying out the interpolation calculation on the input image data stored in the line memory unit 523 in response to the control signal for starting the interpolation calculation input from the memory control unit 524.”).
Nakazono in view of Kwon fails to teach the pixel data being equal to the addition of twice the maximum number of distortion lines related to the target pixel and half the number of lines used for the distortion interpolation operation.
However, Nakazono in view of Kwon does teach, at Nakazono Paragraph [0019], “storage areas for a predetermined plurality of unit lines”, and at Nakazono Paragraph [0068], “If input image data of the number of unit lines of the post-ULB and the pre-ULB to be used when the interpolation calculation is carried out is provided within the line memory unit 523, a control signal for starting the interpolation calculation is output to the interpolation calculating unit 525.” These passages suggest that the number of line memories can span a range of values and can increase or decrease based on the amount of data needed. It would have been obvious to one of ordinary skill in the art to arrive at the claimed range as a result of routine optimization. One of ordinary skill in the art would have had a reasonable expectation of success in formulating the claimed range because Nakazono Paragraph [0021] states: “FIG. 10 shows a storage area capable of storing input image data for 16 lines (unit lines) included as the line memory unit 523, a line of input image data before a distortion correction corresponding to output image data to be interpolated when output image data of one row (line) is output, and a range (the number of unit lines) of input image data necessary for an interpolation calculation. In the example shown in FIG. 10, an example in which output image data is generated by the bicubic interpolation using a line where input image data before a distortion correction corresponding to the output image data is stored, input image data of its previous line, and input image data for up to two subsequent lines.” This suggests that the number of lines needed will differ based on the interpolation operation (in this application the operation is bilinear interpolation; in the prior art the operation is a bicubic interpolation) and the size of the input and output data.
In regards to claim 7, Nakazono in view of Kwon teaches the image processing device of claim 6, wherein the start signal includes coordinate information of the target pixel. (Nakazono Paragraph [0038] “The range calculating unit may designate the unit line including the coordinates of the input image data corresponding to the output image data for which the interpolation calculation is first carried out by the interpolation calculating unit within the output image data included in the line to be distortion-corrected and output as a base.” Examiner note: The unit line including the coordinates is equivalent to the coordinate information of the target pixel, since it is the first coordinates used in response to the start signal.)
In regards to claim 8, Nakazono in view of Kwon teaches the image processing device of claim 5, wherein the distortion interpolator further comprises a buffer reader configured to receive the start signal and generate position information indicating a position of the interpolation data that are stored in the line memories based on a display coordinate of the target pixel and a distortion coordinate of the target pixel. (Nakazono Paragraph [0036] “The distortion correcting unit may include a distortion correction coordinate transforming unit that obtains coordinates indicating a position of the input image data corresponding to a position of the output image data”; Paragraph [0038] “The range calculating unit may designate the unit line including the coordinates of the input image data corresponding to the output image data for which the interpolation calculation is first carried out by the interpolation calculating unit within the output image data included in the line to be distortion-corrected and output as a base.” Examiner note: The first excerpt shows that the unit includes the coordinate of both the current display coordinate of the pixel and a desired distortion coordinate of the pixel. The second excerpt shows that the start signal includes the coordinate information.)
In regards to claim 9, Nakazono in view of Kwon teaches the image processing device of claim 8, wherein the position information includes a vertical coordinate indicating a line memory, among the line memories, in which the interpolation data is stored and a horizontal coordinate indicating a horizontal direction coordinate in which the interpolation data is stored in the line memory. (Nakazono Paragraph [0038] “The range calculating unit may designate the unit line including the coordinates of the input image data corresponding to the output image data for which the interpolation calculation is first carried out by the interpolation calculating unit within the output image data included in the line to be distortion-corrected and output as a base. The range calculating unit may calculate a first number of unit lines indicating the number of units maximally separated in a first direction perpendicular to the unit line serving as the base and a second number of unit lines indicating the number of unit lines maximally separated in a second direction reverse to the first direction as information regarding a range of the input image data, for the unit lines including the coordinates of the input image data corresponding to the output image data for which the interpolation calculation is subsequently carried out by the interpolation calculating unit.” Examiner note: This excerpt shows that the coordinate information contains two parts, the first number is analogous to the horizontal coordinate, and the second number is analogous to the vertical coordinate.)
In regards to claim 25, Nakazono in view of Kwon renders obvious the claim limitations as in the consideration of claim 1.
In regards to claim 26, Nakazono in view of Kwon renders obvious the claim limitations as in the consideration of claims 8, and 25.
Claims 17 and 20-22 are rejected under 35 U.S.C. 103 as being unpatentable over Nakazono in view of Kwon, and further in view of “A Novel Approach to Real-time Bilinear Interpolation” (hereinafter referred to by its primary author, Gribbon).
In regards to claim 17, Nakazono in view of Kwon teaches the image processing device of claim 8, but fails to teach a clock signal manager configured to apply a first clock signal that is used for the distortion interpolation operation to the distortion interpolator and apply a second clock signal that is at least two times faster than the first clock signal to the buffer.
However, Gribbon teaches a clock signal manager configured to apply a first clock signal that is used for the distortion interpolation operation to the distortion interpolator and apply a second clock signal that is at least two times faster than the first clock signal to the buffer. (Gribbon Section 2.2 “Bilinear interpolation requires simultaneous access to four pixels from the input image (see figure (2) and equation (3)). However, only a single access can be made to the frame buffer per clock cycle. Possible alternatives to deal with this problem are the use multiport RAM, multiple RAM banks in parallel or using a faster RAM clock to read multiple locations in a single pixel clock cycle.” Examiner note: In this reference, the first clock signal that is being applied to the distortion interpolator would be the pixel clock cycle, since the interpolation calculations are done on each pixel clock cycle [see Section 2.1]. The second clock signal that is being applied to the buffer would be the faster RAM clock cycle that can read multiple locations in a single pixel clock cycle. This clock would have to be at least twice as fast as the first clock, since more than one piece of data is being read each cycle.)
Gribbon is considered analogous to the claimed invention because they are both in the same field of pixel interpolation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Nakazono in view of Kwon to include the teachings of faster RAM clock cycles of Gribbon, to provide the benefit of a system that can access multiple pixel values (4 in the case of bilinear interpolation) in a single clock cycle (Gribbon Section 2.2 “Bilinear interpolation requires simultaneous access to four pixels from the input image (see figure (2) and equation (3)). However, only a single access can be made to the frame buffer per clock cycle... [A] Possible alternative… using a faster RAM clock to read multiple locations in a single pixel clock cycle.”)
In regards to claim 20, Nakazono in view of Kwon teaches the image processing device of claim 8, but fails to teach wherein the buffer reader generates weighted value information that is used for the distortion interpolation operation based on a distortion value indicating a difference between the display coordinate of the target pixel and the distortion coordinate of the target pixel, and wherein the distortion interpolator corrects a result of the distortion interpolation operation based on the weighted value information.
However, Gribbon teaches wherein the buffer reader generates weighted value information that is used for the distortion interpolation operation based on a distortion value indicating a difference between the display coordinate of the target pixel and the distortion coordinate of the target pixel, and wherein the distortion interpolator corrects a result of the distortion interpolation operation based on the weighted value information. (Gribbon Figure 2; Section 1 “The algorithm obtains the pixel value by taking a weighted sum of the pixel values of the four nearest neighbors surrounding the calculated location as shown below in figure 2 and equation (3):”)
Gribbon is considered analogous to the claimed invention because they are both in the same field of pixel interpolation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Nakazono in view of Kwon to include the teachings of using the fractional value of calculated pixel locations of Gribbon, to provide the benefit of reducing error that causes artefacts in interpolated images (Gribbon Section 1 “As the coordinates calculated by the inverse mapping function are rarely integer values, their location lies “between” the pixels in the original image. Simplistic methods that round or truncate the fractional component of the calculated coordinate can introduce substantial error in the pixel location, one effect of which distorts lines by producing jagged-edge artefacts.”)
In regards to claim 21, Nakazono in view of Kwon and Gribbon teaches the image processing device of claim 20, wherein the buffer reader generates the position information based on an integer part of the distortion value and generates the weighted value information based on a fractional part of the distortion value. (Gribbon Figure 2; Equation 3. Examiner note: The integer components x_i and y_i are used to identify the positions of the reference pixels, and the fractional components x_f and y_f are used as the interpolation weights, as can be seen in equation 3.)
In regards to claim 22, Nakazono in view of Kwon and Gribbon teaches the image processing method of claim 20, wherein the buffer reader delays an output of the weighted value information and outputs the weighted value information at the same timing as the interpolation data. (Gribbon Figure 6 Examiner note: In this pipeline, the interpolation coefficients are based on the fractional portions of the x and y coordinate (analogous to the weighted value information) as can be seen in equations 4 and 5. The retrieved pixels from the buffer and previous calculations are analogous to the interpolation data. These coefficients are output at the same time as the retrieved pixels in the pipeline, so that the distortion interpolator can correctly calculate the interpolated pixel, and the calculation for the interpolation coefficients is delayed until all the other retrieved pixels are output.)
Claims 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Nakazono in view of Kwon and Gribbon, and further in view of US20150261250 (hereinafter referred to by its primary author, Jung).
In regards to claim 18, Nakazono in view of Kwon and Gribbon teaches the image processing device of claim 17, but fails to teach wherein the clock signal manager comprises a first clock converter configured to increase a speed of the first clock signal by a speed of the second clock signal, and wherein the first clock converter increases a clock speed for the position information.
However, Jung teaches wherein the clock signal manager comprises a first clock converter configured to increase a speed of the first clock signal by a speed of the second clock signal, and wherein the first clock converter increases a clock speed for the position information. (Jung Paragraph [0050] “The clock scaler 110 may receive a first clock signal SCLK, a first frequency control signal FCS1, and a second frequency control signal FCS2. The clock scaler 110 may generate a second clock signal BCLK based on the first clock signal SCLK, the first frequency control signal FCS1, and the second frequency control signal FCS2… The frequency of the second clock signal BCLK may increase based on the first frequency control signal FCS1…”)
Jung is considered analogous to the claimed invention because they are both in the same field of management of memory with clock signals. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Nakazono in view of Kwon and Gribbon to include the teachings of increasing and decreasing clock cycles of Jung, to provide the benefit of decreasing power consumption when needed through control of clock cycles (Jung Paragraph [0005] “Recently, a large number of components are integrated into one semiconductor device, and operation speed of the semiconductor device is gradually increased. Thus, reduction of power consumption in the semiconductor device is required.”)
In regards to claim 19, Nakazono in view of Kwon, Gribbon and Jung teaches the image processing device of claim 18, wherein the clock signal manager further comprises a second clock converter configured to decrease the speed of the second clock signal by the speed of the first clock signal, and wherein the second clock converter decreases a clock speed for the interpolation data. (Jung Paragraph [0050] “The frequency of the second clock signal BCLK may increase based on the first frequency control signal FCS1, and the frequency of the second clock signal BCLK may decrease based on the second frequency control signal FCS2.”)
Claims 23-24 are rejected under 35 U.S.C. 103 as being unpatentable over Nakazono in view of Kwon, and further in view of WO2019245117 (hereinafter referred to by its primary author, Lee).
In regards to claim 23, Nakazono in view of Kwon teaches the image processing device of claim 8, but fails to teach wherein the distortion interpolator further comprises a demosaicing component configured to determine a color of pixels of the interpolation data based on the display coordinate and the position information and configured to change pixel data for a position of pixels having a color that is different from a color of the target pixel to pixel data of pixels having a color the same as the target pixel.
However, Lee teaches wherein the distortion interpolator further comprises a demosaicing component configured to determine a color of pixels of the interpolation data based on the display coordinate and the position information and configured to change pixel data for a position of pixels having a color that is different from a color of the target pixel to pixel data of pixels having a color the same as the target pixel. (Lee Page 5 Paragraph 5 “Demosaicing filtering requiring 2D convolution may be performed as shown in FIG. 2. Specifically, demosaicing filtering places a convolution kernel at each digital image pixel location, multiplies the coefficients by each image pixel value, and sums the results of the product to derive the result. Demosaicing filtering performs the above-described process on all pixels of the digital image to convert the Bayer image to an RGB image.”)
Lee is considered analogous to the claimed invention because they are both in the same field of image demosaicing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Nakazono in view of Kwon to include the teachings of changing the color pixels based on the surrounding pixels of Lee, to provide the benefit of reduced computational complexity by reusing values from a previous calculation, thus reducing the amount of hardware required. (Lee Page 9 Paragraph 4 “In this case, if the filter coefficient of demosaicing is completely overlapped not only for each line but also for each pixel, the complexity of the demosaicing hardware architecture can be further reduced.”)
In regards to claim 24, Nakazono in view of Kwon and Lee teaches the image processing device of claim 23, wherein the demosaicing component comprises a first demosaicing component configured to change pixel data for a red pixel or blue pixel to pixel data for a green pixel, and a second demosaicing component configured to change the pixel data for the green pixel to the pixel data for the red pixel or the blue pixel. (Lee Page 8 Paragraph 1 “The first convolution unit 632b calculates the green pixel value using the first convolution kernel when the current pixel is blue and the green pixel value is to be calculated or when the current pixel is red and the green pixel value is to be calculated.”; Page 8 Paragraph 3 “The third convolution unit 632d has a current pixel of Green, left and right blue, and a top and bottom red, and wants to calculate a red pixel value, or a current pixel is green, left and right red, top and bottom blue, and a blue pixel. If a value is to be calculated, a red pixel value or a blue pixel value is calculated using a third convolution kernel.”)
Allowable Subject Matter
Claims 10-16 and 27-28 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
In regards to claim 10, Nakazono teaches a first horizontal coordinate, but does not teach a second horizontal coordinate that indicates a position of a pixel within the pixels that share the first horizontal coordinate and the vertical coordinate. This claim requires that the pixels be stored in line memory, that each line memory must contain a number of “blocks” where pixels are stored, and that those “blocks” each contain at least one other pixel value, wherein the second horizontal coordinate determines where inside the “block” the target pixel is located. The prior art of record fails to teach these limitations, alone or in combination.
In regards to claims 11-16, these claims are objected to because of their dependency on claim 10.
In regards to claim 27, the limitation of “a second horizontal coordinate indicating a position at which pixel data for the target pixel is stored, among pixel data for a plurality of pixels that are stored in the first horizontal coordinate” requires the same limitations as claim 10, and is therefore objected to.
In regards to claim 28, this claim is objected to because of its dependency on claim 27.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
US20050122403 teaches a method of correcting a distortion caused by vibration from shaky hands. Image data is stored in memory lines and the distortion is corrected sequentially in the memory.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CALEB LOGAN ESQUINO whose telephone number is (703)756-1462. The examiner can normally be reached M-Th 7:00AM-5:00PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee can be reached at (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CALEB L ESQUINO/Examiner, Art Unit 2677
/ANDREW W BEE/Supervisory Patent Examiner, Art Unit 2677