DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statements (IDSs) submitted on 3/29/2023, 5/2/2023, 3/4/2024, 6/26/2025, and 12/23/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-4, 9, 11-12, and 14-16 are rejected under 35 U.S.C. 102(a)(1) as anticipated by Kotsuji (JP 2019053170 A, April 4, 2019), hereinafter Kotsuji.

Regarding claim 1, Kotsuji discloses a computer-implemented information processing method (Kotsuji ¶0020: "The control unit 4 controls the operation of the multifunction peripheral 100. The control unit 4 includes a CPU 4a and an image processing unit 4b.") comprising: determining, based on musical instrument information indicative of a musical instrument (Kotsuji ¶0018: "The photographing unit 1 photographs (video and audio) the playing part of a person practicing a musical instrument."), a target part of a body of a first player (Kotsuji ¶0018: "The playing part is the part of the body that comes into contact with the instrument 3."), the first player playing the musical instrument indicated by the musical instrument information (Kotsuji ¶0018: "The playing part is the part of the body that is moved to produce sound from the instrument 3 and to change the sound produced."); and acquiring image information indicative of imagery of the determined target part (Kotsuji ¶0018: "When the instrument 3 is a piano, the playing parts are the hands (fingers). The photographing unit 1 is installed in a position where it can photograph the hand (fingers).").

Regarding claim 2, Kotsuji discloses a computer-implemented information processing method comprising the features of claim 1 as discussed above. Kotsuji further discloses transmitting the acquired image information to an external apparatus (Kotsuji ¶0058: "The user may also view the composite data 9 using a portable communication device that the user owns." Kotsuji ¶0057: "The composite data 9 is video data in which the performance parts of the model player and the performance parts of the instrument learner are arranged side by side.").

Regarding claim 3, Kotsuji discloses a computer-implemented information processing method comprising the features of claim 1 as discussed above. Kotsuji further discloses that the determining the target part includes determining the target part based on the identified musical instrument information (Kotsuji ¶0017: "Furthermore, the musical score data 21 may specify the part of the musical piece that should be used for playing each note. For example, keyboard instruments are played with both hands. Examples of keyboard instruments include a piano and an electone. The left hand is generally used to play the lower notes. The right hand is generally used to play the lower notes. In the musical score data 21, a playing part may be defined for each note depending on the instrument 3.").

Regarding claim 4, Kotsuji discloses a computer-implemented information processing method comprising the features of claim 3 as discussed above. Kotsuji further discloses that the related information includes at least one of: information indicative of sounds emitted from the musical instrument; information indicative of imagery of the musical instrument; information indicative of a musical score for the musical instrument; or information indicative of a combination of the musical instrument and a lesson schedule for the musical instrument (Kotsuji ¶0017: "Furthermore, the musical score data 21 may specify the part of the musical piece that should be used for playing each note. For example, keyboard instruments are played with both hands. Examples of keyboard instruments include a piano and an electone. The left hand is generally used to play the lower notes. The right hand is generally used to play the lower notes. In the musical score data 21, a playing part may be defined for each note depending on the instrument 3.").

Regarding claim 9, Kotsuji discloses a computer-implemented information processing method comprising the features of claim 1 as discussed above. Kotsuji further discloses that determining the target part includes determining the target part based on the musical instrument information and sound information (Kotsuji ¶0026: "The microphone 11 collects the sound of the performance [and] the sound processor 12 converts the analog signal output by the microphone 11 into digital data (performance sound data 81)." The sound of the performance includes musical instrument information and sound information.), the sound information being indicative of sounds emitted from the musical instrument indicated by the musical instrument information (Kotsuji ¶0052: "The control unit 4 evaluates the performance according to the selected performance part… the control unit 4 compares the notes in the musical score data 21 that correspond to the direction of the selected hand with the performance sound data 81 and performs an evaluation").
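For illustration, the per-hand evaluation Kotsuji describes (¶0052) reduces to a ratio: the number of notes correctly played with the selected hand divided by the total number of notes assigned to that hand in the score data. The following minimal Python sketch shows that computation; the Note structure and the sample data are hypothetical stand-ins, not taken from Kotsuji:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Note:
        pitch: int    # MIDI note number
        onset: float  # position in beats from the start of the piece
        hand: str     # "left" or "right", as assigned in the score data

    def hand_accuracy(score, performed, hand="right"):
        """Ratio of correctly played notes for one hand (cf. Kotsuji para. 0052)."""
        expected = [n for n in score if n.hand == hand]
        if not expected:
            return 1.0
        correct = sum(1 for n in expected if (n.pitch, n.onset) in performed)
        return correct / len(expected)

    # 3 of the 4 right-hand notes were played correctly -> accuracy 0.75
    score = [Note(60, 0.0, "right"), Note(62, 1.0, "right"),
             Note(64, 2.0, "right"), Note(65, 3.0, "right"),
             Note(48, 0.0, "left")]
    performed = {(60, 0.0), (62, 1.0), (64, 2.0)}
    print(hand_accuracy(score, performed, "right"))  # 0.75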
Regarding claim 11, Kotsuji discloses a computer-implemented information processing method (Kotsuji ¶0020: "The control unit 4 controls the operation of the multifunction peripheral 100. The control unit 4 includes a CPU 4a and an image processing unit 4b.") comprising: determining, based on sound information (Kotsuji ¶0026: "The photographing unit 1 includes a microphone 11. The microphone 11 collects the sound of the performance [and] the sound processor 12 converts the analog signal output by the microphone 11 into digital data (performance sound data 81).") indicative of sounds emitted from a musical instrument (Kotsuji ¶0052: "The control unit 4 evaluates the performance according to the selected performance part… the control unit 4 compares the notes in the musical score data 21 that correspond to the direction of the selected hand with the performance sound data 81 and performs an evaluation"), a target part of a body of a first player (Kotsuji ¶0052: "For example, if the right hand is selected as the playing part"), the first player playing the musical instrument (Kotsuji ¶0052: "the control unit 4 determines whether each note to be played by the right hand in the musical score data 21 has been played correctly. The control unit 4 evaluates the accuracy based on the number of notes correctly played with the right hand and the total number of notes that should be played with the right hand."); and acquiring image information indicative of imagery of the determined target part (Kotsuji ¶0018: "When the instrument 3 is a piano, the playing parts are the hands (fingers). The photographing unit 1 is installed in a position where it can photograph the hand (fingers).").

Regarding claim 12, Kotsuji discloses a computer-implemented information processing method comprising the features of claim 11. Kotsuji further discloses that determining the target part includes determining the target part based on a relationship between the sound information and musical-score information indicative of a musical score (Kotsuji ¶0017: "Furthermore, the musical score data 21 may specify the part of the musical piece that should be used for playing each note. For example, keyboard instruments are played with both hands. Examples of keyboard instruments include a piano and an electone. The left hand is generally used to play the lower notes. The right hand is generally used to play the lower notes. In the musical score data 21, a playing part may be defined for each note depending on the instrument 3.").

Regarding claim 14, Kotsuji discloses a computer-implemented information processing method comprising the features of claim 1 as discussed above. Kotsuji further discloses that determining the target part includes determining the target part based on attention information indicative of an attention matter regarding playing the musical instrument (Kotsuji ¶0037: "The first setting screen 65a also has a third radio button RB3, a fourth radio button RB4, and a fifth radio button RB5. Any one of the third radio button RB3 to the fifth radio button RB5 is selected. When practicing with only the right hand, the third radio button RB3 is operated. When practicing with only the left hand, the fourth radio button RB4 is operated. When practicing with both hands, the fifth radio button RB5 is operated. The operation panel 6 accepts the selection of the body part to be used in practice.").

Regarding claim 15, Kotsuji discloses a computer-implemented information processing method comprising the features of claim 1 as discussed above. Kotsuji further discloses that determining the target part includes determining the target part based on player information regarding the first player (Kotsuji ¶0037: "The first setting screen 65a also has a third radio button RB3, a fourth radio button RB4, and a fifth radio button RB5. Any one of the third radio button RB3 to the fifth radio button RB5 is selected. When practicing with only the right hand, the third radio button RB3 is operated. When practicing with only the left hand, the fourth radio button RB4 is operated. When practicing with both hands, the fifth radio button RB5 is operated. The operation panel 6 accepts the selection of the body part to be used in practice.").

Regarding claim 16, Kotsuji discloses an information processing system comprising: at least one memory configured to store instructions (Kotsuji ¶0020: "The memory unit 5 includes a RAM 51, a ROM 52, and a storage 53."); and at least one processor configured to execute the instructions (Kotsuji ¶0020: "The control unit 4 controls the operation of the multifunction peripheral 100. The control unit 4 includes a CPU 4a and an image processing unit 4b.") to: determine, based on musical instrument information indicative of a musical instrument (Kotsuji ¶0018: "The photographing unit 1 photographs (video and audio) the playing part of a person practicing a musical instrument."), a target part of a body of a first player (Kotsuji ¶0018: "The playing part is the part of the body that comes into contact with the instrument 3."), the first player playing the musical instrument indicated by the musical instrument information (Kotsuji ¶0018: "The playing part is the part of the body that is moved to produce sound from the instrument 3 and to change the sound produced."); and acquire image information indicative of imagery of the determined target part (Kotsuji ¶0018: "When the instrument 3 is a piano, the playing parts are the hands (fingers). The photographing unit 1 is installed in a position where it can photograph the hand (fingers).").

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
("Towards Score Following in Sheet Music Images," https://arxiv.org/pdf/1612.05050, December 15, 2016, retrieved February 20, 2026), hereinafter Dorfer. Regarding claim 5 , Kotsuji discloses a computer-implemented information processing method comprising the features of claim 3 as discussed above. Kotsuji does not explicitly disclose that identifying the musical instrument information includes: inputting the related information into a trained model, the trained model having been trained to learn a relationship between training-related information and training-musical-instrument information, the training-related information being related to the musical instrument, and the training-musical-instrument information being indicative of a musical instrument specified from the training-related information; and identifying, as the musical instrument information, information output from the trained model in response to the related information. However, Dorfer teaches or suggests that identifying the musical instrument information includes: inputting the related information into a trained model (Dorfer abstract: "It consists of an end-to-end multi-modal convolutional neural network that takes as input images of sheet music and spectrograms of the respective audio snippets") , the trained model having been trained to learn a relationship between training-related information and training-musical-instrument information (Dorfer § 2.1: "The model takes two different input modalities at the same time: images of scores, and short excerpts from spectrograms of audio renditions of the score") , the training-related information being related to the musical instrument (Dorfer § 3.2: "We select the first track of the midi files (right hand, piano) and render it as sheet music using Lilypond.") , and the training-musical-instrument information being indicative of a musical instrument specified from the training-related information (Dorfer § 3.2: "We synthesize the midi-tracks to flac-audio using Fluidsynth 4 and a Steinway piano sound font.") ; and identifying, as the musical instrument information, information output from the trained model in response to the related information (Dorfer § 4.2: "As a final point, we report on first attempts at working with 'real' music. For this purpose one of the authors played the right hand part of a simple piece (Minuet in G Major by Johann Sebastian Bach, BWV Anhang 114) – which, of course, was not part of the training data – on a Yamaha AvantGrand N2 hybrid piano") . It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the computer-implemented information processing method of Kotsuji by adding the trained model of Dorfer to precisely link a performance to its respective sheet music (Dorfer § 1). Regarding claim 6 , Kotsuji (in view of Dorfer) teaches a computer-implemented information processing method comprising the features of claim 5 as discussed above. 
Regarding claim 6, Kotsuji (in view of Dorfer) teaches a computer-implemented information processing method comprising the features of claim 5 as discussed above. Dorfer further teaches or suggests that the related information and the training-related information each indicates sounds emitted from the musical instrument (Dorfer § 2.1: "The model takes two different input modalities at the same time: images of scores, and short excerpts from spectrograms of audio renditions of the score"); and the training-musical-instrument information indicates, as the musical instrument specified from the training-related information, a musical instrument that emits the sounds indicated by the training-related information (Dorfer § 3.2: "We synthesize the midi-tracks to flac-audio using Fluidsynth 4 and a Steinway piano sound font.").

Regarding claim 10, Kotsuji discloses a computer-implemented information processing method comprising the features of claim 9 as discussed above. Kotsuji does not explicitly disclose that determining the target part includes: inputting input information into a trained model, the input information including the musical instrument information and the sound information, the trained model having been trained to learn a relationship between training-input information and training-output information, the training-input information including training-musical-instrument information and training-sound information, the training-musical-instrument information being indicative of the musical instrument, the training-sound information being indicative of sounds emitted from the musical instrument indicated by the training-musical-instrument information, the training-output information being indicative of a target part of a body of a second player, the second player playing the musical instrument indicated by the training-musical-instrument information, and the musical instrument indicated by the training-musical-instrument information emitting the sounds indicated by the training-sound information; and determining the target part based on output information output from the trained model in response to the input information.
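To make the dense recitation above easier to follow, the sketch below restates the training-pair structure that claim 10 recites as a simple data layout. It is purely illustrative; the field names and example rows are hypothetical and are not drawn from Kotsuji or Dorfer:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class TrainingExample:
        instrument: str        # training-musical-instrument information
        sound_features: tuple  # training-sound information (e.g., spectral summary)
        target_part: str       # training-output: body part of the second player

    dataset = [
        TrainingExample("piano", (0.8, 0.1), "hands"),
        TrainingExample("trumpet", (0.2, 0.9), "mouth"),
    ]
    # A trained model would learn the mapping
    # (instrument, sound_features) -> target_part; at inference, the claim's
    # "output information" is the target part predicted for new input information.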
However, Dorfer teaches or suggests that determining the target part includes: inputting input information into a trained model (Dorfer abstract: "It consists of an end-to-end multi-modal convolutional neural network that takes as input images of sheet music and spectrograms of the respective audio snippets"), the input information including the musical instrument information (Dorfer § 3.2: "We select the first track of the midi files (right hand, piano) and render it as sheet music using Lilypond.") and the sound information (Dorfer § 3.2: "We synthesize the midi-tracks to flac-audio using Fluidsynth 4 and a Steinway piano sound font."), the trained model having been trained to learn a relationship between training-input information and training-output information (Dorfer abstract: "It learns to predict, for a given unseen audio snippet (covering approximately one bar of music), the corresponding position in the respective score line."), the training-input information including training-musical-instrument information and training-sound information (Dorfer § 2.1: "The model takes two different input modalities at the same time: images of scores, and short excerpts from spectrograms of audio renditions of the score"), the training-musical-instrument information being indicative of the musical instrument (Dorfer § 3.2: "We select the first track of the midi files (right hand, piano) and render it as sheet music using Lilypond."), the training-sound information being indicative of sounds emitted from the musical instrument indicated by the training-musical-instrument information (Dorfer § 3.2: "We synthesize the midi-tracks to flac-audio using Fluidsynth 4 and a Steinway piano sound font."), the training-output information being indicative of a target part of a body of a second player (Dorfer § 3.2: "We select the first track of the midi files (right hand, piano) and render it as sheet music using Lilypond."), the second player playing the musical instrument indicated by the training-musical-instrument information (Dorfer § 4.2: "As a final point, we report on first attempts at working with 'real' music. For this purpose one of the authors played the right hand part of a simple piece"), and the musical instrument indicated by the training-musical-instrument information emitting the sounds indicated by the training-sound information (Dorfer § 4.2: "As a final point, we report on first attempts at working with 'real' music. For this purpose one of the authors played the right hand part of a simple piece (Minuet in G Major by Johann Sebastian Bach, BWV Anhang 114) – which, of course, was not part of the training data – on a Yamaha AvantGrand N2 hybrid piano"); and determining the target part based on output information output from the trained model in response to the input information (Dorfer § 4.2: "As a final point, we report on first attempts at working with 'real' music. For this purpose one of the authors played the right hand part of a simple piece (Minuet in G Major by Johann Sebastian Bach, BWV Anhang 114) – which, of course, was not part of the training data – on a Yamaha AvantGrand N2 hybrid piano and recorded it using a single microphone. In this application scenario we predict the corresponding sheet locations not only at times of onsets but for a continuous audio stream (subsequent spectrogram excerpts).").
It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the computer-implemented information processing method of Kotsuji by adding the trained model of Dorfer to precisely link a performance to its respective sheet music (Dorfer § 1).

Regarding claim 13, Kotsuji discloses a computer-implemented information processing method comprising the features of claim 11 as discussed above. Kotsuji further suggests the second player playing the musical instrument in accordance with the musical score indicated by the training-musical-score information (Kotsuji ¶0009: "the playing parts of a musical instrument practitioner playing the piece of music corresponding to the musical score data."), and the musical instrument being capable of emitting the sounds indicated by the training-sound information (Kotsuji ¶0080: "The synthesized data 9 may include musical sounds. When the composite data 9 is played back, the control unit 4 may simultaneously play back the performance of the musical instrument learner from the speaker 64").

Kotsuji does not explicitly disclose that determining the target part includes: inputting input information into a trained model, the input information including the sound information and musical-score information, the musical-score information being indicative of a musical score, the trained model having been trained to learn a relationship between training-input information and training-output information, the training-input information including training-sound information and training-musical-score information, the training-sound information being indicative of sounds emitted from the musical instrument, the training-musical-score information being indicative of a musical score, the training-output information being indicative of a target part of a body of a second player; and determining the target part based on output information output from the trained model in response to the input information.
However, Dorfer teaches or suggests that determining the target part includes: inputting input information into a trained model, the input information including the sound information and musical-score information (Dorfer abstract: "It consists of an end-to-end multi-modal convolutional neural network that takes as input images of sheet music and spectrograms of the respective audio snippets"), the musical-score information being indicative of a musical score (Dorfer abstract: "takes as input images of sheet music"), the trained model having been trained to learn a relationship between training-input information and training-output information (Dorfer abstract: "It learns to predict, for a given unseen audio snippet (covering approximately one bar of music), the corresponding position in the respective score line."), the training-input information including training-sound information and training-musical-score information (Dorfer § 2.1: "The model takes two different input modalities at the same time: images of scores, and short excerpts from spectrograms of audio renditions of the score"), the training-sound information being indicative of sounds emitted from the musical instrument (Dorfer § 3.2: "We synthesize the midi-tracks to flac-audio using Fluidsynth 4 and a Steinway piano sound font."), the training-musical-score information being indicative of a musical score (Dorfer § 3.2: "We select the first track of the midi files (right hand, piano) and render it as sheet music using Lilypond."), the training-output information being indicative of a target part of a body of a second player (Dorfer § 3.2: "We select the first track of the midi files (right hand, piano) and render it as sheet music using Lilypond."); and determining the target part based on output information output from the trained model in response to the input information (Dorfer § 4.2: "As a final point, we report on first attempts at working with 'real' music. For this purpose one of the authors played the right hand part of a simple piece (Minuet in G Major by Johann Sebastian Bach, BWV Anhang 114) – which, of course, was not part of the training data – on a Yamaha AvantGrand N2 hybrid piano and recorded it using a single microphone. In this application scenario we predict the corresponding sheet locations not only at times of onsets but for a continuous audio stream (subsequent spectrogram excerpts).").

It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the computer-implemented information processing method of Kotsuji by adding the trained model of Dorfer to precisely link a performance to its respective sheet music (Dorfer § 1).

Claim 7 is rejected under 35 U.S.C. 103 as unpatentable over Kotsuji in view of Dorfer, and further in view of Arandjelović et al. ("Objects that Sound," https://arxiv.org/pdf/1712.06651, July 25, 2018, retrieved February 20, 2026), hereinafter Arandjelović.

Regarding claim 7, Kotsuji (in view of Dorfer) teaches a computer-implemented information processing method comprising the features of claim 5 as discussed above.
Kotsuji (in view of Dorfer) does not explicitly disclose that the related information and the training-related information each indicates imagery of the musical instrument, and the training-musical-instrument information indicates, as the musical instrument specified from the training-related information, a musical instrument represented by the imagery indicated by the training-related information.

However, Arandjelović teaches or suggests that the related information and the training-related information each indicates imagery of the musical instrument (Arandjelović § 3.1: "The architectures are trained on the AudioSet-Instruments train-val set, and evaluated on the AudioSet-Instruments test set described in Section 2"), and the training-musical-instrument information indicates, as the musical instrument specified from the training-related information, a musical instrument represented by the imagery indicated by the training-related information (Arandjelović § 4.1: "The ability of the network to localize the object(s) that sound is demonstrated in Figure 5. It is able to detect a wide range of objects in different viewpoints and scales, and under challenging imaging conditions. A more detailed discussion including the analysis of some failure cases is available in the figure caption. As expected from an unsupervised method, it is not necessarily the case that it detects the entire object but can focus only on specific discriminative parts such as the interface between the hands and the piano keyboard.").

It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the computer-implemented information processing method of Kotsuji (as modified by Dorfer) by adding the trained model of Arandjelović to focus on specific discriminative parts of an instrument such as the interface between the hands and the piano keyboard (Arandjelović § 4.1).

Claim 8 is rejected under 35 U.S.C. 103 as unpatentable over Kotsuji in view of Takijiri et al. (JP 2014167576 A, September 11, 2014), hereinafter Takijiri.

Regarding claim 8, Kotsuji discloses a computer-implemented information processing method comprising the features of claim 3 as discussed above. Kotsuji does not explicitly disclose that identifying the musical instrument information includes identifying, as the musical instrument information, reference-musical-instrument information associated with the related information by referring to a table indicative of associations between reference-related information related to the musical instrument and the reference-musical-instrument information indicative of the musical instrument.

However, Takijiri teaches or suggests that identifying the musical instrument information includes identifying, as the musical instrument information, reference-musical-instrument information associated with the related information by referring to a table indicative of associations between reference-related information related to the musical instrument and the reference-musical-instrument information indicative of the musical instrument (Takijiri ¶0030: "Here, the image recognition process performed when setting the performance part in FIG. 3 (a) will be described with reference to FIGS. The control device 20 has the instrument image table shown in FIG. 4 stored in the ROM 47, the CD-ROM 53, or the like. In this instrument image table, exterior images of various instruments that can be used in minus-one performances are stored in association with the names of the respective instruments... In this embodiment, the karaoke device 10 performs image recognition processing to identify the type of instrument by comparing the external image of the instrument captured by the camera 25 with which external image in the instrument image table it is closest to.").

It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the computer-implemented information processing method of Kotsuji by adding the table of Takijiri to identify the type of instrument captured by the camera (Takijiri ¶0030).
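For illustration, the image recognition Takijiri describes (¶0030) amounts to a nearest-match lookup: the captured exterior image is compared against each stored exterior image in the instrument image table, and the name associated with the closest entry identifies the instrument. The minimal Python sketch below shows that idea; the images, the 32x32 representation, and the Euclidean distance metric are placeholder assumptions, not details from Takijiri:

    import numpy as np

    # "Instrument image table": exterior images keyed by instrument name
    instrument_table = {
        "piano":  np.random.rand(32, 32),
        "guitar": np.random.rand(32, 32),
        "violin": np.random.rand(32, 32),
    }

    def identify_instrument(captured: np.ndarray) -> str:
        """Return the name whose stored exterior image is closest to the capture."""
        return min(instrument_table,
                   key=lambda name: np.linalg.norm(instrument_table[name] - captured))

    print(identify_instrument(np.random.rand(32, 32)))  # e.g., "guitar"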
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PHILIP SCOLES, whose telephone number is (703) 756-1831. The examiner can normally be reached Monday-Friday, 8:30-4:30 ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Dedei Hammond, can be reached at 571-270-7938. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PHILIP G SCOLES/
Examiner, Art Unit 2837

/DEDEI K HAMMOND/
Supervisory Patent Examiner, Art Unit 2837