DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/26/2025 has been entered.
Response to Arguments
Applicant's arguments with respect to claims 21-23, 25-29, 31-36 and 38-43 have been considered but are moot in view of the new ground(s) of rejection.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 21 and 35 is/are rejected under 35 U.S.C. 103 as being unpatentable over Crenshaw et al. (“Crenshaw”) (U.S. PG Publication No. 2014/0098886) in view of Lee et al. (“Lee”) (U.S. PG Publication No. 2003/0128760).
In regards to claim 21, as seen in ¶0098 of Crenshaw, for local spatial regions that contain a large magnitude of motion [thus motion vectors of high magnitude], the system will reduce the local spatial resolution and bit depth in order to improve the perceived quality of the local region; this is taken in view of ¶0052, wherein, as an example, a region with high-magnitude motion may be reduced from 64 pixels down to 16 pixels.
According to applicant’s specification, geometrical data is data which identifies tiles having objects, as described in ¶0009 of the originally filed specification, with ¶0018 further describing that geometric data is used to identify objects in an image, and ¶0025 further stating that objects are identified based on geometrical data. In other words, objects are identified and associated with geometric data, and thus object identification may be interpreted as the identification of the geometry of objects in the image data.
Although Crenshaw discusses motion blur across images being caused by objects that move quickly enough relative to the camera’s shutter duration in captured video data as described in ¶0004-0006 and 0026, actual object identification within incoming image data is not explicitly described. In a similar endeavor Lee not only teaches that motion estimation techniques can identify objects and their motion, but that this may be done across frames as described in the Abstract, ¶0009, 0011, thereby more appropriately associating the movement of objects across frames with that of objects that were indeed identified within the image data [and thus their representative geometric data].
Therefore together Crenshaw and Lee teach a device, comprising:
a processor (See ¶0030) configured to:
render a plurality of frames for display using geometric data from an executing application (See ¶0098 in view of ¶0052 of Crenshaw, in view of the Abstract, ¶0009 and 0011 of Lee) by selectively rendering one or more tiles of a plurality of tiles of a first frame of the plurality of frames at a reduced resolution in response to a magnitude of a motion vector for the one or more tiles exceeding a first threshold value (See ¶0098 of Crenshaw, wherein, for local spatial regions that contain a large magnitude of motion [thus motion vectors of high magnitude], the system reduces the local spatial resolution and bit depth in order to improve the perceived quality of the local region; this is taken in view of ¶0052 of Crenshaw, wherein, as an example, a region with high-magnitude motion may be reduced from 64 pixels down to 16 pixels. It is noted that motion-based information within the invention is incorporated into motion vector signaling through the use of codecs such as H.264 and SVC, as seen in ¶0159-0160 of Crenshaw, which would directly use motion vector data as representations of the motion data, as would be readily understood by one of ordinary skill in the art given that this is within the video compression field, a field whose fundamentals revolve around blocks of image data and the motion vectors between them), wherein each tile comprises one or more pixels, sub-pixels, or fragments (See ¶0098 in view of ¶0052 of Crenshaw, wherein a tile may be taught as a region [or even blocks and/or packed blocks as described in ¶0080-0081], which in the example given may have 64 pixels and be reduced to 16 pixels).
It would have been obvious to a person of ordinary skill in the art, and before the effective filing date of the claimed invention, to incorporate the teaching of Lee into Crenshaw because it allows for the identification of object blocks in a frame and then accordingly identifying a best match within another frame, from which motion vector data is thus created as described in at least ¶0011, thus allowing for more efficient compression of video data across frames.
Although not used for the rejection, the applicant is directed towards additional prior art which shows areas that experience large amounts of movement will then be reduced in resolution as seen in ¶0036-0039 and 0048 of Cohen et al. [U.S. PG Publication No. 2011/0013692], ¶0023 of Tanaka et al. [U.S. PG Publication No. 2011/0064276] and ¶0073 of Chen et al. [U.S. PG Publication No. 2010/0316126].
In regards to claim 35, the claim is rejected under the same basis as claim 21 by Crenshaw in view of Lee.
Claim(s) 22 is/are rejected under 35 U.S.C. 103 as being unpatentable over Crenshaw et al. (“Crenshaw”) (U.S. PG Publication No. 2014/0098886) in view of Lee et al. (“Lee”) (U.S. PG Publication No. 2003/0128760) and Possos et al. (“Possos”) (U.S. PG Publication No. 2018/0343448).
In regards to claim 22, Crenshaw fails to teach the device of claim 21, further comprising: a motion estimator engine configured to generate a motion vector field comprising a motion vector for each tile of the first frame, the motion vectors based on a comparison between corresponding tiles of a second frame of the plurality of frames immediately preceding the first frame and a third frame of the plurality of frames immediately preceding the second frame.
In a similar endeavor Possos teaches a motion estimator engine configured to generate a motion vector field comprising a motion vector for each tile of the first frame (See ¶0062 and FIG. 14), the motion vectors based on a comparison between corresponding tiles of a second frame of the plurality of frames immediately preceding the first frame and a third frame of the plurality of frames immediately preceding the second frame (See for example FIG. 6A and 20).
It would have been obvious to a person of ordinary skill in the art, and before the effective filing date of the claimed invention, to incorporate the teaching of Possos into Crenshaw because it allows for motion estimation to be performed including using adaptive motion compensated temporal filtering with two reference frames as seen in ¶0072 which may be used in an IPP structure GoP, thus producing continual motion estimation across the GoP unidirectionally time-wise.
Claim(s) 23 and 39 is/are rejected under 35 U.S.C. 103 as being unpatentable over Crenshaw et al. (“Crenshaw”) (U.S. PG Publication No. 2014/0098886) in view of Lee et al. (“Lee”) (U.S. PG Publication No. 2003/0128760) and Possos et al. (“Possos”) (U.S. PG Publication No. 2018/0343448), in further view of Krishnamachari et al. (“Krishna”) (U.S. PG Publication No. 2002/0141501).
In regards to claim 23, Crenshaw fails to teach the device of claim 22, wherein the motion estimator engine is further to: estimate magnitudes of differences in pixel values for each corresponding tile of the first, second, and third frames; and wherein the processor is configured to render each tile at a resolution that is based at least in part on the magnitudes of differences in pixel values.
That is, the difference in pixel values is what creates motion vector data; however, for the purposes of compact prosecution, an additional reference is provided which more explicitly describes this.
In a similar endeavor Possos teaches estimate magnitudes of differences in pixel values for each corresponding tile of the first, second, and third frames (See FIG. 6A and 20).
It would have been obvious to a person of ordinary skill in the art, and before the effective filing date of the claimed invention, to incorporate the teaching of Possos into Crenshaw because it allows for motion estimation to be performed including using adaptive motion compensated temporal filtering with two reference frames as seen in ¶0072 which may be used in an IPP structure GoP, thus producing continual motion estimation across the GoP unidirectionally time-wise.
In a similar endeavor Krishna teaches wherein the processor is configured to render each tile at a resolution that is based at least in part on the magnitudes of differences in pixel values (See ¶0037 and 0060).
It would have been obvious to a person of ordinary skill in the art, and before the effective filing date of the claimed invention, to incorporate the teaching of Krishna into Crenshaw because it allows for resolution to be determined based on pixel data between succeeding target frames as described in the Abstract, while still maintaining accuracy as described in at least ¶0007.
In regards to claim 39, the claim is rejected under the same basis as claim 23 by Crenshaw in view of Lee and Possos, in further view of Krishna.
Claim(s) 25 and 38 is/are rejected under 35 U.S.C. 103 as being unpatentable over Crenshaw et al. (“Crenshaw”) (U.S. PG Publication No. 2014/0098886) in view of Lee et al. (“Lee”) (U.S. PG Publication No. 2003/0128760) and Chen et al. (“Chen”) (U.S. PG Publication No. 2018/0098089).
In regards to claim 25, Crenshaw fails to teach the device of claim 21, wherein the processor is configured to selectively render one or more tiles of a plurality of tiles of the first frame at an increased resolution in response to a magnitude of a motion vector for the one or more tiles being less than a second threshold value by rendering less than one pixel along a direction of motion with the same pixel value for a tile having a motion vector smaller than a threshold value.
In a similar endeavor Chen teaches wherein the processor is configured to selectively render one or more tiles of a plurality of tiles of the first frame at an increased resolution in response to a magnitude of a motion vector for the one or more tiles being less than a second threshold value by rendering less than one pixel along a direction of motion with the same pixel value for a tile having a motion vector smaller than a threshold value (See ¶0116-0117, wherein an example of this may be seen; in this case a sub-pixel [which is less than a pixel] may be rendered along a direction of motion as a tile [sub-pixel accuracy] when differences between it and the pixels are smaller than a threshold value. This is taken in view of ¶0045, which describes that smaller blocks can provide better resolution; thus, if pixel accuracy is increased for specific regions, then those regions effectively have a higher level of detail and thus a higher resolution, as this is based on a motion vector resolution as described in ¶0049).
It would have been obvious to a person of ordinary skill in the art, and before the effective filing date of the claimed invention, to incorporate the teaching of Chen into Crenshaw because it allows for pixel accuracy depending on the motion values of sub-pixels within their respective pixels as described in ¶0017, thus improving image quality.
In regards to claim 38, the claim is rejected under the same basis as claim 25 by Crenshaw in view of Lee and Chen.
Claim(s) 26 and 40 is/are rejected under 35 U.S.C. 103 as being unpatentable over Crenshaw et al. (“Crenshaw”) (U.S. PG Publication No. 2014/0098886) in view of Lee et al. (“Lee”) (U.S. PG Publication No. 2003/0128760) and Chen et al. (“Chen”) (U.S. PG Publication No. 2018/0098089), in further view of Gordon (U.S. PG Publication No. 2006/0256855).
In regards to claim 26, Crenshaw fails to teach the device of claim 25, wherein the processor is configured to render each tile at a resolution that is based at least in part on at least one of a presence of skin color or a presence of an object or portion of an object within each tile.
In a similar endeavor Gordon teaches wherein the processor is configured to render each tile at a resolution that is based at least in part on at least one of a presence of skin color or a presence of an object or portion of an object within each tile (See ¶0030).
It would have been obvious to a person of ordinary skill in the art, and before the effective filing date of the claimed invention, to incorporate the teaching of Gordon into Crenshaw because it allows for increased resolution for human skin as it may be desired that human features are to be enhanced as described in ¶0030.
In regards to claim 40, the claim is rejected under the same basis as claim 26 by Crenshaw in view of Lee and Chen, in further view of Gordon.
Claim(s) 27 is/are rejected under 35 U.S.C. 103 as being unpatentable over Crenshaw et al. (“Crenshaw”) (U.S. PG Publication No. 2014/0098886) in view of Lee et al. (“Lee”) (U.S. PG Publication No. 2003/0128760) and Zhang et al. (“Zhang”) (U.S. PG Publication No. 2020/0058152).
In regards to claim 27, Crenshaw fails to teach the device of claim 21, wherein the processor is configured to render each tile at a resolution that is at least in part based on a frame rate requirement for the executing application.
In a similar endeavor Zhang teaches wherein the processor is configured to render each tile at a resolution that is at least in part based on a frame rate requirement for the executing application (See ¶0098 in view of FIG. 9).
It would have been obvious to a person of ordinary skill in the art, and before the effective filing date of the claimed invention, to incorporate the teaching of Zhang into Crenshaw because it allows for proper bandwidth control through the dynamic adjustment of rendering properties of resolution vs. frame rate as described in ¶0098.
Claim(s) 28 is/are rejected under 35 U.S.C. 103 as being unpatentable over Vlachos et al. (“Vlachos”) (U.S. PG Publication No. 2020/0111195) in view of Lee et al. (“Lee”) (U.S. PG Publication No. 2003/0128760) and Crenshaw et al. (“Crenshaw”) (U.S. PG Publication No. 2014/0098886).
In regards to claim 28, as seen in ¶0024 and 0036-0037 in view of FIG. 1 and 15 of Vlachos, the system re-projects frames that further account for motion of objects that move or animate frame-to-frame by estimating the motion [e.g., direction and magnitude] of objects over multiple frames, which takes the form of motion vectors, with ¶0038 further showing that pixel values corresponding to objects are determined for frames of the plurality of frames.
According to applicant’s specification, geometrical data is data which identifies tiles having objects, as described in ¶0009 of the originally filed specification, with ¶0018 further describing that geometric data is used to identify objects in an image, and ¶0025 further stating that objects are identified based on geometrical data. In other words, objects are identified and associated with geometric data, and thus object identification may be interpreted as the identification of the geometry of objects in the image data.
Although Vlachos discusses object movement and direction across images, as well as their pixel data [e.g., luminance and chroma data] as described above and also in at least ¶0116-0120, with such data being represented across arrays of motion vectors, actual object identification within incoming image data is not explicitly described. In a similar endeavor Lee not only teaches that motion estimation techniques can identify objects and their motion, but that this may be done across frames as described in the Abstract, ¶0009, 0011, thereby more appropriately associating the movement of objects across frames with that of objects that were indeed identified within the image data [and thus their representative geometric data].
Therefore, together Vlachos and Lee teach a processing system comprising:
a memory to store a first displayed frame and a second displayed frame of a plurality of frames being rendered based on geometric data from an executing application (See ¶0142-0145 and FIG. 16 of Vlachos with regards to memory; also see FIG. 1 of Vlachos with regards to the first, second, and plurality of frames; finally, see the Abstract, ¶0009 and 0011 of Lee with regards to object data being identified [thus associated geometric data]);
a motion estimation processor engine configured to generate a motion vector field comprising a plurality of motion vectors for a frame of the plurality of frames based on a comparison of the first displayed frame with the second displayed frame (See ¶0024 and 0036-0037 in view of FIG. 1 and 15 of Vlachos, also see 0116-0121 of Vlachos as an additional example).
It would have been obvious to a person of ordinary skill in the art, and before the effective filing date of the claimed invention, to incorporate the teaching of Lee into Vlachos because it allows for the identification of object blocks in a frame and then accordingly identifying a best match within another frame, from which motion vector data is thus created as described in at least ¶0011, thus allowing for more efficient compression of video data across frames.
Vlachos, however, fails to additionally teach a processor configured to: render the plurality of frames by selectively rendering one or more tiles of a plurality of tiles of the frame of the plurality of frames at a reduced resolution in response to a magnitude of a motion vector of the plurality of motion vectors for the one or more tiles exceeding a first threshold value, wherein each tile comprises one or more pixels, sub-pixels, or fragments.
In a similar endeavor Crenshaw teaches a processor (See ¶0030) configured to:
render the plurality of frames by selectively rendering one or more tiles of a plurality of tiles of the frame of the plurality of frames at a reduced resolution in response to a magnitude of a motion vector of the plurality of motion vectors for the one or more tiles exceeding a first threshold value (See ¶0098, wherein, for local spatial regions that contain a large magnitude of motion [thus motion vectors of high magnitude], the system reduces the local spatial resolution and bit depth in order to improve the perceived quality of the local region; this is taken in view of ¶0052, wherein, as an example, a region with high-magnitude motion may be reduced from 64 pixels down to 16 pixels. It is noted that motion-based information within the invention is incorporated into motion vector signaling through the use of codecs such as H.264 and SVC, which would directly use motion vector data as representations of the motion data, as would be readily understood by one of ordinary skill in the art given that this is within the video compression field, a field whose fundamentals revolve around blocks of image data and the motion vectors between them), wherein each tile comprises one or more pixels, sub-pixels, or fragments (See ¶0098 in view of ¶0052, wherein a tile may be taught as a region, which in the example given may have 64 pixels and be reduced to 16 pixels).
Although not used for the rejection, the applicant is directed towards additional prior art which shows areas that experience large amounts of movement will then be reduced in resolution as seen in ¶0036-0039 and 0048 of Cohen et al. [U.S. PG Publication No. 2011/0013692], ¶0023 of Tanaka et al. [U.S. PG Publication No. 2011/0064276] and ¶0073 of Chen et al. [U.S. PG Publication No. 2010/0316126].
Claim(s) 29 and 32 is/are rejected under 35 U.S.C. 103 as being unpatentable over Vlachos et al. (“Vlachos”) (U.S. PG Publication No. 2020/0111195) in view of Lee et al. (“Lee”) (U.S. PG Publication No. 2003/0128760) and Crenshaw et al. (“Crenshaw”) (U.S. PG Publication No. 2014/0098886), in further view of Possos et al. (“Possos”) (U.S. PG Publication No. 2018/0343448).
In regards to claim 29, Vlachos teaches the processing system of claim 28, wherein the motion vector field includes a motion vector for each tile of the frame of the plurality of frames.
In regards to claim 32, the claim is rejected under the same basis as claim 23 by Vlachos in view of Lee and Crenshaw, in further view of Possos.
Claim(s) 31 is/are rejected under 35 U.S.C. 103 as being unpatentable over Vlachos et al. (“Vlachos”) (U.S. PG Publication No. 2020/0111195) in view of Lee et al. (“Lee”) (U.S. PG Publication No. 2003/0128760) and Crenshaw et al. (“Crenshaw”) (U.S. PG Publication No. 2014/0098886), in further view of Chen et al. (“Chen”) (U.S. PG Publication No. 2018/0098089).
In regards to claim 31, the claim is rejected under the same basis as claim 25 by Vlachos in view of Lee and Crenshaw, in further view of Chen.
Claim(s) 33 is/are rejected under 35 U.S.C. 103 as being unpatentable over Vlachos et al. (“Vlachos”) (U.S. PG Publication No. 2020/0111195) in view of Lee et al. (“Lee”) (U.S. PG Publication No. 2003/0128760) and Crenshaw et al. (“Crenshaw”) (U.S. PG Publication No. 2014/0098886), in further view of Gordon (U.S. PG Publication No. 2006/0256855).
In regards to claim 33, the claim is rejected under the same basis as claim 26 by Vlachos in view of Lee and Crenshaw, in further view of Gordon.
Claim(s) 34 is/are rejected under 35 U.S.C. 103 as being unpatentable over Vlachos et al. (“Vlachos”) (U.S. PG Publication No. 2020/0111195) in view of Lee et al. (“Lee”) (U.S. PG Publication No. 2003/0128760) and Crenshaw et al. (“Crenshaw”) (U.S. PG Publication No. 2014/0098886), in further view of Zhang et al. (“Zhang”) (U.S. PG Publication No. 2020/0058152).
In regards to claim 34, the claim is rejected under the same basis as claim 27 by Vlachos in view of Lee and Crenshaw, in further view of Zhang.
Claim(s) 36 is/are rejected under 35 U.S.C. 103 as being unpatentable over Crenshaw et al. (“Crenshaw”) (U.S. PG Publication No. 2014/0098886) in view of Lee et al. (“Lee”) (U.S. PG Publication No. 2003/0128760) and Brueckner et al. (“Brueckner”) (U.S. PG Publication No. 2018/0189574).
In regards to claim 36, Crenshaw fails to teach the method of claim 35, further comprising: rendering each tile of the first frame at a resolution that is based at least in part on a presence of an object or portion of an object within the tile based on the geometric data.
In a similar endeavor Brueckner teaches rendering each tile of the first frame at a resolution that is based at least in part on a presence of an object or portion of an object within the tile based on the geometric data (See ¶0019 and 0058 in view of FIG. 1 and 7, wherein the resolution of the tiles [regions] of the first frame may depend on whether they are regions of interest or regions outside of regions of interest, the regions of interest themselves being those where objects are detected. It is noted that the regions of interest are determined based at least in part on one or more environmental factors, as seen in FIG. 1 and 7, such as a sensed object or location data of one or more sensed objects as described in ¶0022; as such, in these regions the environmental factors are what determine which regions are considered to be regions of interest and which are not. It is additionally noted that objects are indeed identified and detected, and thus their corresponding geometric data is identified as well).
It would have been obvious to a person of ordinary skill in the art, and before the effective filing date of the claimed invention, to incorporate the teaching of Brueckner into Crenshaw with a well-known object detection system that analyzes lines and shapes in detecting objects, because such techniques allow for discrimination between objects and non-objects in image data through the use of shapes and lines as part of the process.
Claim(s) 41 and 43 is/are rejected under 35 U.S.C. 103 as being unpatentable over Crenshaw et al. (“Crenshaw”) (U.S. PG Publication No. 2014/0098886) in view of Lee et al. (“Lee”) (U.S. PG Publication No. 2003/0128760) and Lim et al. (“Lim”) (U.S. PG Publication No. 2012/0314771), in further view of Siminoff (U.S. Patent No. 10,939,120).
In regards to claim 41, Crenshaw fails to teach the device of claim 21, wherein the processor is configured to: reduce a resolution of the one or more tiles by rendering each pixel of a group of pixels of the one or more tiles along a direction of the motion vector with the same pixel color value.
In a similar endeavor Lim and Siminoff teach wherein the processor is configured to: reduce a resolution of the one or more tiles by rendering each pixel of a group of pixels of the one or more tiles along a direction of the motion vector with the same pixel color value (See ¶0222 of Lim, taken in view of Crenshaw’s teaching of reducing the resolution of the group of pixels if past such a threshold; this is also more specifically taught by Siminoff, as seen in col. 24, li. 37-47, which teaches that such large movements together indicate that the pixels are part of the same object, as taught by Lim, and may be set to the same color value in order to reduce the amount of data needed to represent a portion of the frame, as described by Siminoff).
It would have been obvious to a person of ordinary skill in the art, and before the effective filing date of the claimed invention, to incorporate the teaching of Lim and Siminoff into Crenshaw because it allows for the object motion of a pixel group to be determined using a pixel of interest at the center of the object wherein the magnitude of the motion vector is greater than a predetermined threshold value, as described in ¶0222 of Lim, thus introducing efficiency by calculating the movement of a group through the calculation of a single pixel of interest, and where such a combination of data may be used to reduce the amount of data used to represent the portion of the frame, as described in col. 24, li. 37-47 of Siminoff.
In regards to claim 43, the claim is rejected under the same basis as claim 41 by Crenshaw in view of Lee and Lim, in further view of Siminoff.
Claim(s) 42 is/are rejected under 35 U.S.C. 103 as being unpatentable over Vlachos et al. (“Vlachos”) (U.S. PG Publication No. 2020/0111195) in view of Lee et al. (“Lee”) (U.S. PG Publication No. 2003/0128760) and Crenshaw et al. (“Crenshaw”) (U.S. PG Publication No. 2014/0098886) and Lim et al. (“Lim”) (U.S. PG Publication No. 2012/0314771), in further view of Siminoff (U.S. Patent No. 10,939,120).
In regards to claim 42, the claim is rejected under the same basis as claim 41 by Vlachos in view of Lee and Crenshaw, in further view of Lim and Siminoff.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDEMIO NAVAS JR whose telephone number is (571) 270-1067. The examiner can normally be reached M-F, 9 AM - 6 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Joseph Ustaris, can be reached at 571-272-7383. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
EDEMIO NAVAS JR
Primary Examiner
Art Unit 2483
/EDEMIO NAVAS JR/Primary Examiner, Art Unit 2483